| 2011-07-01 07:39:56 utc | jmettraux | tosch_le: hello |
| 2011-07-01 07:40:16 utc | tosch_le | hello! |
| 2011-07-01 07:41:10 utc | tosch_le | started working on a fsstorage-only worker yesterday, but i'm still unsure about the way to go. |
| 2011-07-01 07:41:25 utc | jmettraux | the other day, I wanted your opinion on relaxing the dependencies for ruote-kit |
| 2011-07-01 07:41:44 utc | jmettraux | fs_storage: sweet |
| 2011-07-01 07:42:24 utc | tosch_le | had a look at https://github.com/ttilley/fssm |
| 2011-07-01 07:42:48 utc | jmettraux | looks ideal |
| 2011-07-01 07:43:34 utc | tosch_le | my only worry is that i'll need two threads: one for fssm and one for checking the schedules. |
| 2011-07-01 07:43:55 utc | tosch_le | there might be issues when there's no file locking mechanism |
| 2011-07-01 07:44:14 utc | tosch_le | or not? |
| 2011-07-01 07:44:15 utc | jmettraux | what about having 1 thread that receives msgs and schedules events ? |
| 2011-07-01 07:44:34 utc | jmettraux | (it would have to keep all the schedules in memory though) |
| 2011-07-01 07:45:34 utc | tosch_le | i can't change the loop fssm provides as it's dependent on the backend fssm uses, so that'll be hard |
| 2011-07-01 07:45:47 utc | tosch_le | i thought about using https://github.com/mockko/em-dir-watcher |
| 2011-07-01 07:45:59 utc | tosch_le | and using em for the schedules, too |
| 2011-07-01 07:46:15 utc | jmettraux | you can watch a directory ? |
| 2011-07-01 07:46:33 utc | tosch_le | using em-dir-watcher, yes. |
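
A minimal sketch of what that looks like, assuming em-dir-watcher's EMDirWatcher.watch block API; the watched path is made up for illustration:

    require 'eventmachine'
    require 'em-dir-watcher'

    EM.run do
      # the block is called with the list of changed paths whenever a file
      # in the directory is created, modified or deleted
      EMDirWatcher.watch '/tmp/ruote_work/msgs' do |paths|
        paths.each { |path| puts "changed: #{path}" }
      end
    end
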
| 2011-07-01 07:46:56 utc | jmettraux | FSSS |
| 2011-07-01 07:47:44 utc | jmettraux | glob **/msg-* |
| 2011-07-01 07:47:51 utc | jmettraux | ah yes |
| 2011-07-01 07:47:53 utc | jmettraux | :-( |
| 2011-07-01 07:48:26 utc | tosch_le | pardon, what does fsss mean? |
| 2011-07-01 07:48:40 utc | jmettraux | sorry, was looking at fssm doc |
| 2011-07-01 07:48:57 utc | jmettraux | we could move msgs and schedules in a dir |
| 2011-07-01 07:49:03 utc | jmettraux | so that we could watch that dir |
| 2011-07-01 07:49:11 utc | jmettraux | and not care about other changes |
| 2011-07-01 07:49:32 utc | tosch_le | they are in separate dirs and that's lovely and sufficient. |
| 2011-07-01 07:50:15 utc | jmettraux | but one watch thread is better than two ? |
| 2011-07-01 07:50:40 utc | tosch_le | one watch thread should be able to watch more than one dir |
| 2011-07-01 07:51:12 utc | jmettraux | so life is good |
| 2011-07-01 07:51:33 utc | tosch_le | but that doesn't solve the schedules problem: when it's time to run a schedule, no change happens in the fs and so no event is fired |
| 2011-07-01 07:51:58 utc | jmettraux | that's why you'd be forced to keep all the schedules in memory |
| 2011-07-01 07:52:02 utc | tosch_le | yes |
| 2011-07-01 07:52:09 utc | jmettraux | and update that table when new schedules come in |
| 2011-07-01 07:52:12 utc | jmettraux | (or go out) |
| 2011-07-01 07:52:39 utc | jmettraux | when the time comes, you reserve the schedule, trigger it and life is good |
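
A rough sketch of that in-memory schedule table; the document fields and the reserve/trigger helpers are assumptions for illustration, not ruote's actual API:

    require 'time'

    SCHEDULES = {}   # in-memory table, keyed by schedule id

    def schedule_added(doc)        # a schedule appeared in the storage
      SCHEDULES[doc['_id']] = doc
    end

    def schedule_removed(doc_id)   # a schedule went away
      SCHEDULES.delete(doc_id)
    end

    def reserve(doc)               # hypothetical: claim the schedule so it
      SCHEDULES.delete(doc['_id']) # fires only once
    end

    def trigger(doc)               # hypothetical: emit the msg the schedule carries
      puts "triggering #{doc['_id']}"
    end

    def check_schedules(now = Time.now)
      SCHEDULES.values.each do |doc|
        next if Time.parse(doc['at']) > now
        trigger(doc) if reserve(doc)
      end
    end
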
| 2011-07-01 07:53:15 utc | tosch_le | yeah, but one thread will have to make sure there is a trigger when the time comes, whether the schedules are in memory or not. |
| 2011-07-01 07:53:32 utc | jmettraux | ah yes |
| 2011-07-01 07:53:37 utc | jmettraux | two threads |
| 2011-07-01 07:54:32 utc | jmettraux | schedules are tricky |
| 2011-07-01 07:54:52 utc | tosch_le | and i think i was wrong about the file locking problem: schedules are not msgs; when one thread triggers a schedule, there should be no conflict with any msg being processed by the other thread |
| 2011-07-01 07:55:10 utc | jmettraux | ok |
| 2011-07-01 07:56:06 utc | tosch_le | schedules are tricky, yes, that's why using eventmachine was appealing to me: i'd have events for fs changes and events for schedules, too. |
| 2011-07-01 07:56:45 utc | jmettraux | sounds good |
| 2011-07-01 07:57:36 utc | tosch_le | and fssm has the drawback that it lacks a proper #stop method. it just waits for an interrupt and that's somewhat annoying. |
| 2011-07-01 07:58:09 utc | jmettraux | can't remove the trap ? |
| 2011-07-01 07:59:01 utc | tosch_le | the run loop is something like "loop while 1 rescue Interrupt" |
| 2011-07-01 07:59:24 utc | tosch_le | no way to stop it besides raising an interrupt |
| 2011-07-01 08:03:09 utc | tosch_le | anyway, i think i'll try the eventmachine way: i'll watch the msgs dir and process msgs when they arrive. i'll use EM::Timer for the schedules and create those timers through watching the scheds dir for changes. |
| 2011-07-01 08:03:15 utc | jmettraux | the em watcher works on linux + osx ? |
| 2011-07-01 08:03:16 utc | tosch_le | or some way like that. |
| 2011-07-01 08:03:42 utc | tosch_le | windows is supported, too. |
| 2011-07-01 08:03:58 utc | tosch_le | ( via win32-changenotify) |
| 2011-07-01 08:04:00 utc | jmettraux | schedules : once per minute ? |
| 2011-07-01 08:04:21 utc | tosch_le | no, instantly when it's time for them. |
| 2011-07-01 08:04:38 utc | tosch_le | they'll be in memory. |
| 2011-07-01 08:04:41 utc | jmettraux | so it's limited to 1 worker |
| 2011-07-01 08:04:50 utc | jmettraux | ah ok |
| 2011-07-01 08:05:29 utc | jmettraux | sounds nice |
| 2011-07-01 08:05:46 utc | tosch_le | let's see where i get with this. |
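
Roughly the shape that plan could take, assuming em-dir-watcher for both directories and EM::Timer for the schedules; the directory layout, document fields and process helper are assumptions for illustration:

    require 'json'
    require 'time'
    require 'eventmachine'
    require 'em-dir-watcher'

    STORAGE_PATH = '/tmp/ruote_work'   # made-up fsstorage path

    def process(msg)                   # placeholder for the worker's msg handling
      puts "processing #{msg['action']}"
    end

    EM.run do
      # process msgs as soon as their files show up
      EMDirWatcher.watch "#{STORAGE_PATH}/msgs" do |paths|
        paths.each do |path|
          next unless File.exist?(path)
          process(JSON.parse(File.read(path)))
        end
      end

      # turn each new schedule into an EM::Timer firing at its 'at' time
      EMDirWatcher.watch "#{STORAGE_PATH}/schedules" do |paths|
        paths.each do |path|
          next unless File.exist?(path)
          sched = JSON.parse(File.read(path))
          delay = [ Time.parse(sched['at']) - Time.now, 0 ].max
          EM::Timer.new(delay) do
            puts "triggering #{sched['_id']}"
          end
        end
      end
    end
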
| 2011-07-01 08:06:15 utc | tosch_le | anyway, back to ruote-kit. which dependencies do you want to remove/change? |
| 2011-07-01 08:06:19 utc | jmettraux | don't hesitate to update the doc on the go : http://ruote.rubyforge.org/implementing_a_storage.html ;-) |
| 2011-07-01 08:06:44 utc | jmettraux | ruote-kit: https://github.com/kennethkalmer/ruote-kit/network ; I've noticed that the forkers were motivated by haml versions and co |
| 2011-07-01 08:07:09 utc | tosch_le | i don't implement a storage ;-) |
| 2011-07-01 08:08:58 utc | jmettraux | FsWatchStorage |
| 2011-07-01 08:09:03 utc | jmettraux | BayWatchStorage |
| 2011-07-01 08:09:34 utc | tosch_le | ruote-kit: i think it's fine to use >= more often. since we have Gemfile.lock now, we can say we've tested against version x.y.z if anything breaks |
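
For illustration, loosening a pinned dependency in the ruote-kit gemspec could look like this; the gem and version numbers are placeholders, not ruote-kit's actual requirements:

    # ruote-kit.gemspec (hypothetical excerpt)
    Gem::Specification.new do |s|
      s.name = 'ruote-kit'
      # instead of pinning an exact version...
      #   s.add_dependency 'haml', '= 3.0.25'
      # ...accept anything from the tested version on:
      s.add_dependency 'haml', '>= 3.0.25'
    end
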
| 2011-07-01 08:10:22 utc | tosch_le | FsStorageWatcherWorker |
| 2011-07-01 08:10:45 utc | jmettraux | +1 |
| 2011-07-01 08:10:59 utc | jmettraux | so it's a worker |
| 2011-07-01 08:11:19 utc | tosch_le | sure it is. the storage is dumb, you remember? ;-) |
| 2011-07-01 08:11:26 utc | jmettraux | that raises the question of : should we re-implement the worker thing |
| 2011-07-01 08:11:49 utc | tosch_le | i'll create a subclass of Worker. |
| 2011-07-01 08:11:51 utc | jmettraux | the worker is dumb currently : 1 polling thread |
| 2011-07-01 08:12:48 utc | tosch_le | dumb was the wrong word. the storage is only the persistence layer, it doesn't process any messages or schedules. |
| 2011-07-01 08:14:08 utc | tosch_le | i don't think it's necessary to re-implement the worker, it's fine the way it is. and you designed ruote 2.1 in a way that it's really easy to use another worker. |
| 2011-07-01 08:14:52 utc | jmettraux | ok |
| 2011-07-01 08:14:53 utc | tosch_le | the only thing that would be great is if i could run ruote's functional tests against the new worker instead of Ruote::Worker |
| 2011-07-01 08:15:27 utc | tosch_le | like you can change the storage class to use |
| 2011-07-01 08:16:10 utc | jmettraux | true |
| 2011-07-01 08:18:42 utc | tosch_le | and i'd like to have access to the path of the storage directory fsstorage uses. right now, i'd have to use @storage.instance_variable_get(:@cloche).instance_variable_get(:@path) |
| 2011-07-01 08:19:55 utc | jmettraux | we can keep track of the cloche dir as an instance variable in FsStorage |
| 2011-07-01 08:20:18 utc | tosch_le | that would be great. |
| 2011-07-01 08:20:19 utc | jmettraux | or add a #path method to FsStorage that asks cloche for the path |
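
A minimal sketch of that second option, assuming FsStorage keeps its Rufus::Cloche in @cloche and that the cloche exposes its directory; the linked commit below may well do it differently:

    module Ruote
      class FsStorage
        # expose the directory the underlying cloche persists to
        def path
          @cloche.dir   # assumes Rufus::Cloche has a #dir reader
        end
      end
    end
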
| 2011-07-01 08:20:57 utc | tosch_le | anything like that would be appreciated. |
| 2011-07-01 08:21:03 utc | jmettraux | let me add something quickly then |
| 2011-07-01 08:21:17 utc | tosch_le | thanks! |
| 2011-07-01 08:22:22 utc | jmettraux | https://github.com/jmettraux/ruote/commit/00a7ff0a4045e9ea20ecfc71a8ab98408286bfdd |
| 2011-07-01 08:23:56 utc | tosch_le | thanks again. |
| 2011-07-01 08:25:08 utc | jmettraux | you're welcome |
| 2011-07-01 08:30:25 utc | jmettraux | I'd put the file watching functionality in a storage |
| 2011-07-01 08:30:44 utc | jmettraux | because it's fs dependent |
| 2011-07-01 08:31:09 utc | jmettraux | but that would still require a new worker |
| 2011-07-01 08:31:26 utc | jmettraux | pull vs push |
| 2011-07-01 08:31:46 utc | tosch_le | i'd raise an exception in the initializer: complain when the given storage isn't a fsstorage |
| 2011-07-01 08:32:35 utc | tosch_le | it's a mixed thing; it's a worker that depends on a special storage |
| 2011-07-01 08:32:37 utc | jmettraux | wait, we still want to read the storage when a worker is not present |
| 2011-07-01 08:32:45 utc | tosch_le | well, not special, but well known |
| 2011-07-01 08:32:58 utc | jmettraux | ok, makes sense |
| 2011-07-01 08:33:18 utc | jmettraux | such a worker, I would need it for ruote-redis as well |
| 2011-07-01 08:33:29 utc | jmettraux | even for ruote-couch |
| 2011-07-01 08:33:55 utc | tosch_le | you'd need push functionality in redis or couchdb |
| 2011-07-01 08:34:08 utc | jmettraux | it's already in there: blpop |
| 2011-07-01 08:34:10 utc | jmettraux | and co |
| 2011-07-01 08:34:34 utc | tosch_le | ah, never dived that deep into redis. sounds amazing. |
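
For context, the blocking pop mentioned above, sketched with the redis-rb client; the key name is made up and this is not how ruote-redis actually stores msgs:

    require 'redis'

    redis = Redis.new

    # producer: push a msg onto a list
    redis.rpush('ruote:msgs', '{"action":"launch"}')

    # consumer: block until a msg is available; when several consumers block
    # on the same key, each msg goes to exactly one of them
    _key, msg = redis.blpop('ruote:msgs', 0)
    puts msg
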
| 2011-07-01 08:35:25 utc | jmettraux | maybe if the storage presents a hook point to the worker, it could switch to listen instead of poll |
| 2011-07-01 08:35:44 utc | jmettraux | sorry, I'm trying to scale your idea to all the storages |
| 2011-07-01 08:38:28 utc | jmettraux | such a worker could be used by the 3 storages |
| 2011-07-01 08:38:35 utc | jmettraux | the three stooges |
| 2011-07-01 08:39:10 utc | tosch_le | it's not that easy with multi-worker environments: to which worker shall the storage push? |
| 2011-07-01 08:39:58 utc | jmettraux | all |
| 2011-07-01 08:40:18 utc | jmettraux | it depends |
| 2011-07-01 08:40:27 utc | jmettraux | with redis, blpop gives to 1 consumer |
| 2011-07-01 08:40:40 utc | jmettraux | couch would notify everybody (http long poll) |
| 2011-07-01 08:41:14 utc | jmettraux | your fsevent system would notify all its workers |
| 2011-07-01 08:42:19 utc | tosch_le | hmm. each storage may provide a worker backend. if there is none, the default (polling) backend is used. |
| 2011-07-01 08:43:14 utc | tosch_le | the worker could be em-based by default, to make it easy to write other backends |
| 2011-07-01 08:43:23 utc | jmettraux | :^( |
| 2011-07-01 08:43:42 utc | tosch_le | evented worker. but em as a dependency is a no-go, i suppose. |
| 2011-07-01 08:44:22 utc | jmettraux | em is great, but I try to avoid depending on it |
| 2011-07-01 08:45:05 utc | jmettraux | your special fs storage extension could be em based |
| 2011-07-01 08:45:35 utc | jmettraux | the new special worker would just place hooks into the special storages (if the hook points are provided) |
| 2011-07-01 08:45:50 utc | jmettraux | no hook points, back to polling |
| 2011-07-01 08:46:14 utc | jmettraux | you would be totally free to use EM in your storage impl |
| 2011-07-01 08:46:35 utc | jmettraux | ACTION emits coffee |
| 2011-07-01 08:46:53 utc | tosch_le | ACTION emits Vita Cola |
| 2011-07-01 08:46:57 utc | tosch_le | ;-) |
| 2011-07-01 08:47:57 utc | jmettraux | thanks ! |
| 2011-07-01 08:47:58 utc | tosch_le | hmm, the storage could reply to a "worker_class" method |
| 2011-07-01 08:48:13 utc | tosch_le | or something like that. |
| 2011-07-01 08:49:06 utc | jmettraux | storage.respond_to?(:on_msg) ? storage.on_msg(self) : poll |
| 2011-07-01 08:49:36 utc | jmettraux | "worker_class" sounds sovietic |
| 2011-07-01 08:49:43 utc | jmettraux | ;-) |
| 2011-07-01 08:50:35 utc | tosch_le | rofl |
| 2011-07-01 08:51:16 utc | tosch_le | what would storage.on_msg do? |
| 2011-07-01 08:51:45 utc | jmettraux | more like |
| 2011-07-01 08:51:46 utc | tosch_le | it would push the message to the worker |
| 2011-07-01 08:51:49 utc | tosch_le | ? |
| 2011-07-01 08:51:55 utc | jmettraux | storage.on_msg do |msg| |
| 2011-07-01 08:51:59 utc | jmettraux | process(msg) |
| 2011-07-01 08:52:01 utc | jmettraux | end |
| 2011-07-01 08:52:23 utc | jmettraux | it would register a block in the storage |
| 2011-07-01 08:52:37 utc | jmettraux | the storage would call that block each time there is a new msg |
| 2011-07-01 08:52:46 utc | jmettraux | something like that |
| 2011-07-01 08:53:15 utc | tosch_le | that way you may tell the storage to push messages somewhere. the receiver side is missing |
| 2011-07-01 08:53:44 utc | jmettraux | the owner of the block is the receiver |
| 2011-07-01 08:56:54 utc | jmettraux | worker says "hey, you the storage, please notify me of new msgs, simply execute that block that I give you with the msg as argument" |
| 2011-07-01 08:57:55 utc | tosch_le | but you'll need different implementations on the worker side for each storage |
| 2011-07-01 08:58:55 utc | jmettraux | not if the kinky details of file watching / couchdb long polling / redis blpopping are dealt with by the storage |
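
Putting the hook point idea together, a sketch of both sides: a worker that registers a block if the storage responds to on_msg and falls back to polling otherwise, and a toy push-capable storage that calls the block for each new msg. Everything apart from the on_msg / get_msgs names is invented for illustration:

    # worker side: listen if the storage offers the hook point, else poll
    class Worker
      def initialize(storage)
        @storage = storage
      end

      def run
        if @storage.respond_to?(:on_msg)
          # push: the storage calls our block for each new msg
          @storage.on_msg { |msg| process(msg) }
        else
          # pull: the plain old polling loop
          loop do
            @storage.get_msgs.each { |msg| process(msg) }
            sleep 0.1
          end
        end
      end

      def process(msg)
        puts "processing #{msg['action']}"
      end
    end

    # storage side: a toy push storage that keeps the registered block and
    # calls it whenever a msg comes in; a real one would do that from its
    # fs watcher, couchdb long poll or redis blpop loop
    class PushStorage
      def on_msg(&block)
        @on_msg = block
      end

      def put_msg(msg)
        @on_msg.call(msg) if @on_msg
      end
    end

    storage = PushStorage.new
    Worker.new(storage).run                 # registers the block and returns
    storage.put_msg('action' => 'launch')   # => "processing launch"
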
| 2011-07-01 08:59:50 utc | jmettraux | ok, I have to go for now |
| 2011-07-01 09:00:12 utc | jmettraux | I can work on the worker if you want |
| 2011-07-01 09:00:17 utc | tosch_le | thanks for the great discussion! |
| 2011-07-01 09:00:24 utc | jmettraux | +1 excellent ! |
| 2011-07-01 09:00:33 utc | jmettraux | you could do the FsStorage++ |
| 2011-07-01 09:00:38 utc | jmettraux | with EM watching |
| 2011-07-01 09:00:43 utc | tosch_le | :-) |
| 2011-07-01 09:00:57 utc | tosch_le | thrilled to see your idea in code |
| 2011-07-01 09:01:41 utc | jmettraux | thanks for the excellent idea (fsevent and co) |