| 2012-11-12 17:34:57 utc | northox | Hello guys! |
| 2012-11-12 17:41:51 utc | northox | I have a question that is not entirely related to Ruote, although the code in question does use Ruote in the background. My question is about rufus-scheduler: has anyone tried to implement persistence for the schedules? |
| 2012-11-12 18:01:04 utc | northox | I'll send a message to the mailing list. |
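A minimal sketch of one way the persistence question above could be approached, assuming rufus-scheduler 2.x and a plain JSON file as the store. rufus-scheduler itself does not persist anything; the store file name and the helper methods here are hypothetical.

```ruby
# Hypothetical persistence layer around rufus-scheduler:
# schedule definitions live in a JSON file and are re-registered on boot.
require 'json'
require 'rufus-scheduler'  # rufus-scheduler 2.x

STORE = 'schedules.json'   # hypothetical store location

def load_schedules
  File.exist?(STORE) ? JSON.parse(File.read(STORE)) : []
end

def save_schedules(schedules)
  File.write(STORE, JSON.pretty_generate(schedules))
end

scheduler = Rufus::Scheduler.start_new

# re-register whatever was persisted before the process last stopped
load_schedules.each do |s|
  scheduler.cron(s['cron']) { puts "running #{s['name']}" }
end

# adding a new schedule writes it to the store as well
def add_schedule(scheduler, name, cron)
  save_schedules(load_schedules << { 'name' => name, 'cron' => cron })
  scheduler.cron(cron) { puts "running #{name}" }
end

add_schedule(scheduler, 'nightly_report', '0 22 * * *')
scheduler.join  # block the main thread while schedules run
```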
| 2012-11-12 21:24:09 utc | mburnett | hi jmettraux |
| 2012-11-12 21:24:23 utc | jmettraux | mburnett: hello, how are you doing? |
| 2012-11-12 21:24:38 utc | mburnett | pretty good, how are you? |
| 2012-11-12 21:24:54 utc | jmettraux | good as well |
| 2012-11-12 21:25:09 utc | jmettraux | I looked at the citerator quadratic and found the cause (I think) |
| 2012-11-12 21:25:14 utc | mburnett | awesome |
| 2012-11-12 21:25:47 utc | mburnett | i was just trying to look at it with a profiler |
| 2012-11-12 21:25:59 utc | jmettraux | in fact, it keeps all the workitems in an array in the citerator and only does the merging when the citerator is deemed over (all children have replied, or some other "over" cause) |
| 2012-11-12 21:26:17 utc | jmettraux | so it's taking up quite a lot of space |
| 2012-11-12 21:26:22 utc | mburnett | aha |
| 2012-11-12 21:26:44 utc | jmettraux | this way of doing things is only useful for certain combinations of merge/merge_type |
| 2012-11-12 21:26:47 utc | mburnett | and is that array communicated between workers in the case of multiple workers at each step? |
| 2012-11-12 21:27:00 utc | jmettraux | yes, copied over and over |
| 2012-11-12 21:27:02 utc | mburnett | oh wow |
| 2012-11-12 21:27:07 utc | mburnett | sounds like a find :D |
| 2012-11-12 21:27:35 utc | jmettraux | well, it works OK with small Ns |
| 2012-11-12 21:27:43 utc | jmettraux | (few branches) |
| 2012-11-12 21:28:08 utc | mburnett | a group of my colleagues have started working on a mostly ground-up solution using resque/sidekiq and are getting great performance... though perhaps not the reliability yet |
| 2012-11-12 21:28:19 utc | jmettraux | perfect |
| 2012-11-12 21:28:23 utc | mburnett | they are hoping to demo something impressive monday |
| 2012-11-12 21:28:53 utc | jmettraux | sorry not to be able to match resque (let's not mention sidekiq) |
| 2012-11-12 21:29:24 utc | mburnett | ha |
| 2012-11-12 21:29:31 utc | mburnett | well, that's ok |
| 2012-11-12 21:29:44 utc | mburnett | i will be happy if it is linear to ~10k jobs |
| 2012-11-12 21:29:56 utc | jmettraux | the fix is to merge as soon as possible so as to keep the footprint minimal (1 or 2 workitems instead of the whole 2000) |
| 2012-11-12 21:29:59 utc | mburnett | i think our use case is a little unusual |
| 2012-11-12 21:30:06 utc | mburnett | cool |
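A rough illustration of the behaviour discussed above, in plain Ruby rather than actual Ruote internals: accumulating every branch's workitem and merging once at the end versus folding each reply into a single merged workitem as it arrives. The `merge` helper is a hypothetical stand-in for Ruote's merge strategies.

```ruby
# Hypothetical stand-in for a merge strategy: later fields win.
def merge(target, workitem)
  target.merge(workitem)
end

# Accumulate-then-merge: the expression's state grows with every reply,
# and that growing array gets copied along with each persisted message.
def merge_at_the_end(replies)
  collected = []
  replies.each { |wi| collected << wi }            # N workitems held at once
  collected.inject({}) { |acc, wi| merge(acc, wi) }
end

# Merge-as-you-go: only one merged workitem is ever kept,
# so the persisted footprint stays roughly constant as N grows.
def merge_incrementally(replies)
  merged = {}
  replies.each { |wi| merged = merge(merged, wi) } # 1 workitem held
  merged
end

replies = (1..2000).map { |i| { "id_#{i}" => i } }
p merge_at_the_end(replies) == merge_incrementally(replies)  # => true
```

The end result is the same for this kind of hash merge; the difference is how much state has to be carried (and copied between workers) while the branches are still running.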
| 2012-11-12 21:30:45 utc | jmettraux | so what kind of merge/merge_type do you need for your concurrent iterators ? |
| 2012-11-12 21:31:16 utc | mburnett | good question |
| 2012-11-12 21:31:42 utc | mburnett | there are some use cases where we just want to put an item from each participant in an array |
| 2012-11-12 21:32:01 utc | mburnett | in some cases order must be preserved |
| 2012-11-12 21:32:05 utc | mburnett | so that's annoying :o |
| 2012-11-12 21:32:17 utc | jmettraux | reply order or apply order ? |
| 2012-11-12 21:32:33 utc | mburnett | ah so imagine we are citerating over a list of ids |
| 2012-11-12 21:33:03 utc | mburnett | each participant in the citer generates some output id, say, and we want the input ids to be associated with the output ids |
| 2012-11-12 21:33:24 utc | jmettraux | is the output heavy? |
| 2012-11-12 21:33:37 utc | mburnett | no |
| 2012-11-12 21:33:43 utc | jmettraux | cool |
| 2012-11-12 21:33:45 utc | mburnett | usually an integer or uuid |
| 2012-11-12 21:34:08 utc | mburnett | we have a very few cases where we currently serialize an object, but not for our large scaling processes |
| 2012-11-12 21:34:23 utc | mburnett | to my knowledge |
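A sketch of how the input-id to output-id association described above might look, assuming a ruote 2.3-style process definition and local participant. The process name, field names, participant name, and the fake output values are all hypothetical, and engine setup / participant registration is omitted.

```ruby
require 'ruote'

# Each branch of the concurrent_iterator receives one input id in the
# 'id' field; the participant records its output under a key derived
# from that id, so the merged workitem keeps one entry per branch.
pdef = Ruote.process_definition 'map_ids' do
  concurrent_iterator :on_field => 'ids', :to_field => 'id' do
    participant 'id_mapper'
  end
end

# Hypothetical participant: maps the input id to some output id.
class IdMapper
  include Ruote::LocalParticipant

  def on_workitem
    input = workitem.fields['id']
    workitem.fields["out_#{input}"] = "uuid-for-#{input}"  # pretend output
    reply
  end
end
```

Because each branch writes to a distinct field, the per-branch results stay associated with their input ids regardless of the order in which the branches reply.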
| 2012-11-12 21:34:49 utc | jmettraux | always thought people would come up with some kind of Hadoop-hooked participant |
| 2012-11-12 21:35:09 utc | mburnett | i suspect that will be something we eventually do |
| 2012-11-12 21:35:17 utc | mburnett | hadoop has mixed support here |
| 2012-11-12 21:35:27 utc | mburnett | it makes sense for some of our processes, but maybe not for others |
| 2012-11-12 21:35:32 utc | jmettraux | ok |
| 2012-11-12 21:35:49 utc | mburnett | it would be cool for it to be an amqp service |
| 2012-11-12 21:36:02 utc | mburnett | and for it to be able to use ruote to orchestrate other processes |
| 2012-11-12 23:02:30 utc | jmettraux | MCamou: hello, you're still up, sorry, I was planning to respond to your email tomorrow (for your european morning) |
| 2012-11-13 00:08:59 utc | MCamou | jmettraux: Hi, saw your reply. Just watching a bit of TV and catching up on work. |
| 2012-11-13 00:09:25 utc | jmettraux | ok, take care |
| 2012-11-13 00:09:58 utc | MCamou | Things are working now! :) |
| 2012-11-13 00:10:21 utc | jmettraux | weird thing, is it JRuby? |
| 2012-11-13 00:11:12 utc | MCamou | Thank you very much! |
| 2012-11-13 00:12:29 utc | MCamou | You take care too... time for bed now |
| 2012-11-13 00:16:19 utc | MCamou | Sorry…had to restart my IRC client which was becoming awfully sluggish |
| 2012-11-13 00:16:35 utc | MCamou | I think it might have been user (programmer) error :) |
| 2012-11-13 00:16:48 utc | MCamou | a BKC failure |
| 2012-11-13 00:16:57 utc | MCamou | but I'll look into it tomorrow |
| 2012-11-13 00:17:01 utc | jmettraux | ok |
| 2012-11-13 00:17:11 utc | jmettraux | have a good night! |
| 2012-11-13 00:17:13 utc | MCamou | anyway… take care and have a great day! |
| 2012-11-13 00:17:20 utc | jmettraux | :-) |