TomEE chokes on too many @Asynchronous operations
I am using Apache TomEE 1.5.2 JAX-RS, pretty much out of the box, with the predefined HSQLDB.

The following is simplified code. I have a REST-style interface for receiving signals:
```java
@Stateless
@Path("signal")
public class SignalEndpoint {

    @Inject
    private SignalStore store;

    @POST
    public void post() {
        store.createSignal();
    }
}
```
Receiving a signal triggers a lot of stuff. The store will create an entity, then fire an asynchronous event.
```java
public class SignalStore {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<SignalEntity> created;

    public void createSignal() {
        SignalEntity entity = new SignalEntity();
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
```
The dispatcher is very simple and merely exists to make the event handling asynchronous.
```java
@Stateless
public class EventDispatcher {

    @Asynchronous
    public <T> void fire(Event<T> event, T parameter) {
        event.fire(parameter);
    }
}
```
Receiving the event is something else, which derives data from the signal, stores it, and fires another asynchronous event:
```java
@Stateless
public class DerivedDataCreator {

    @PersistenceContext
    private EntityManager em;

    @EJB
    private EventDispatcher dispatcher;

    @Inject
    private Event<DerivedDataEntity> created;

    @Asynchronous
    public void onSignalEntityCreated(@Observes SignalEntity signalEntity) {
        DerivedDataEntity entity = new DerivedDataEntity(signalEntity);
        em.persist(entity);
        dispatcher.fire(created, entity);
    }
}
```
Reacting to that is a third layer of entity creation.
To summarize: I have a REST call that synchronously creates a SignalEntity, which asynchronously triggers the creation of a DerivedDataEntity, which asynchronously triggers the creation of a third type of entity. It all works perfectly, and the storage processes are beautifully decoupled.
Except when I programmatically trigger a lot (e.g. 1000) of signals in a for-loop. Depending on the AsynchronousPool size, after processing a number of signals (quite fast) amounting to about half of that size, the application freezes for minutes. Then it resumes and processes the same amount of signals, again quite fast, before freezing again.
I have been playing around with the AsynchronousPool settings for the last half hour. Setting it to 2000, for instance, makes all the signals be processed at once, without any freezes. But the system isn't sane after that either: triggering 1000 signals resulted in all of them being created all right, but the entire creation of derived data never happened.
Now I am at a loss as to what to do. I can of course get rid of the asynchronous events and implement some sort of queue myself, but I always thought the point of an EE container was to relieve me of such tedium. Asynchronous EJB events should bring their own queue mechanism, and one that should not freeze just because the queue is full.
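For reference, the kind of hand-rolled queue I mean would look roughly like this sketch (the class name is mine, not part of the app): a bounded `BlockingQueue` whose `put()` blocks the producer when the queue is full, instead of rejecting the task outright.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal hand-rolled work queue: one worker thread draining a bounded queue.
public class ManualQueue {

    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(100);
    private final Thread worker;

    public ManualQueue() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    queue.take().run(); // drain one task at a time
                }
            } catch (InterruptedException e) {
                // interrupt is our shutdown signal
            }
        });
        worker.start();
    }

    public void submit(Runnable task) throws InterruptedException {
        queue.put(task); // blocks (rather than failing) when the queue is full
    }

    public void shutdown() throws InterruptedException {
        worker.interrupt();
        worker.join();
    }
}
```

That is exactly the tedium I would expect the container's @Asynchronous support to handle for me.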
Any ideas?
Update:

I have tried 1.6.0-SNAPSHOT. It behaves a little bit differently: it still doesn't work, but I get an exception:
```
Aug 01, 2013 3:12:31 PM org.apache.openejb.core.transaction.EjbTransactionUtil handleSystemException
SEVERE: EjbTransactionUtil.handleSystemException: fail to allocate internal resource to execute the target task
javax.ejb.EJBException: fail to allocate internal resource to execute the target task
	at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:81)
	at org.apache.openejb.core.ivm.EjbObjectProxyHandler.businessMethod(EjbObjectProxyHandler.java:240)
	at org.apache.openejb.core.ivm.EjbObjectProxyHandler._invoke(EjbObjectProxyHandler.java:86)
	at org.apache.openejb.core.ivm.BaseEjbProxyHandler.invoke(BaseEjbProxyHandler.java:303)
	at <<... code ...>>
	...
Caused by: java.util.concurrent.RejectedExecutionException: timeout waiting executor slot: waited 30 seconds
	at org.apache.openejb.util.executor.OfferRejectedExecutionHandler.rejectedExecution(OfferRejectedExecutionHandler.java:55)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
	at org.apache.openejb.async.AsynchronousPool.invoke(AsynchronousPool.java:75)
	... 38 more
```
It is as though TomEE does not queue any of the operations. If no thread is free to process a call in the moment it is made, tough luck. Surely this cannot be intended..?
Update 2:

Okay, I seem to have stumbled upon a semi-solution: setting the `AsynchronousPool.QueueSize` property to maxint solves the freeze. But questions remain: why is the queue size limited in the first place, and, more worryingly: why would a full queue block the entire application? If the queue is full, it blocks, but as soon as a task is taken from it, the next one should pop in, right? Instead the queue appears to be blocked until it is completely empty again.
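For reference, this is roughly how I set it, in TomEE's `conf/system.properties` (the `Size` value here is just an example, not a recommendation):

```
# conf/system.properties
# number of worker threads in the @Asynchronous pool
AsynchronousPool.Size = 10
# Integer.MAX_VALUE -- the "semi-solution" described above
AsynchronousPool.QueueSize = 2147483647
```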
Update 3:

For whoever wants to have a go: http://github.com/jandoerrenhaus/tomeefreezetestcase
Update 4:

As it turns out, increasing the queue size does not solve the problem, it merely delays it. The problem remains the same: too many asynchronous operations at once, and TomEE chokes so badly that it cannot even undeploy the application on termination anymore.
So far, my diagnosis is that the task cleanup does not work properly. My tasks are all very small and fast (see the test case on GitHub). I was afraid that it was OpenJPA or HSQLDB slowing down on too many concurrent calls, so I commented out the `em.persist` calls, and the problem remained the same. So if my tasks are quite small and fast, but still manage to block out TomEE so badly that it cannot take in any further task after 30 seconds (`javax.ejb.EJBException: fail to allocate internal resource to execute the target task`), I would imagine that completed tasks linger, clogging up the pipe, so to speak.
Help :(
Basically, BlockingQueues use locks to ensure the consistency of the data and avoid data loss, so in a highly concurrent environment they will reject a lot of tasks (your case).
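You can see this failure mode in isolation with a plain `ThreadPoolExecutor` (this is a standalone sketch, not TomEE's pool; class and method names are mine): once the worker threads are busy and the bounded queue is full, further submissions are rejected outright by the default `AbortPolicy`.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {

    // Returns how many of 'tasks' submissions are rejected by a pool with
    // 'poolSize' threads and a bounded queue of 'queueSize' slots, when all
    // workers are blocked so nothing drains during submission.
    static int countRejections(int poolSize, int queueSize, int tasks) throws InterruptedException {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueSize)); // default AbortPolicy: throws on overflow
        CountDownLatch release = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < tasks; i++) {
            try {
                executor.execute(() -> {
                    try {
                        release.await(); // keep every worker busy
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        release.countDown();
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        // 2 worker threads + 3 queue slots absorb 5 tasks; the other 5 are rejected.
        System.out.println(countRejections(2, 3, 10)); // prints 5
    }
}
```

TomEE's pool hits the same wall, except its handler first waits up to 30 seconds for a slot, hence the "timeout waiting executor slot" in your stack trace.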
You can play, on trunk, with the RejectedExecutionHandler implementation to retry offering the task. One such implementation can be:
```java
new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
        for (int i = 0; i < 10; i++) {
            if (executor.getQueue().offer(r)) {
                return;
            }

            try {
                Thread.sleep(50);
            } catch (final InterruptedException e) {
                // no-op
            }
        }
        throw new RejectedExecutionException();
    }
}
```
It would work better with a random sleep (between a min and a max).

The idea is basically: if the queue is full, wait some short time to reduce the concurrency.
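To illustrate the effect, here is a standalone sketch (plain `ThreadPoolExecutor`, class and method names mine) wiring a handler of that shape into a deliberately tiny pool: submissions that would otherwise be rejected instead wait for a slot to free up, so every task eventually runs.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryHandlerDemo {

    // Retrying handler: re-offer the task up to 10 times, pausing 50 ms
    // between attempts, before giving up for good.
    static final RejectedExecutionHandler RETRY = (r, executor) -> {
        for (int i = 0; i < 10; i++) {
            if (executor.getQueue().offer(r)) {
                return;
            }
            try {
                Thread.sleep(50);
            } catch (final InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        throw new RejectedExecutionException("no executor slot after 10 retries");
    };

    // Submit 'tasks' short jobs to a tiny pool (2 threads, 2 queue slots)
    // and return how many actually ran.
    static int runAll(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2), RETRY);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(10); // simulate a small unit of work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With the default AbortPolicy most of these submissions would throw;
        // with the retrying handler, all of them get a slot eventually.
        System.out.println(runAll(50) + " tasks completed");
    }
}
```

The trade-off is that the submitting thread now blocks briefly under load, which is exactly the back-pressure behavior the question expected from the container in the first place.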