--- Log opened Mon Jul 30 00:00:17 2012
-!- emrecelikten [~emre@] has left #shogun []00:33
-!- emrecelikten [~emre@] has joined #shogun00:33
-!- needsch [] has quit [Ping timeout: 246 seconds]00:41
-!- emrecelikten [~emre@] has quit [Quit: Leaving.]00:57
-!- blackburn [~blackburn@] has quit [Quit: Leaving.]02:55
-!- K0stIa [] has joined #shogun06:39
-!- gsomix [~gsomix@] has joined #shogun07:13
-!- n4nd0 [] has joined #shogun07:14
-!- K0stIa [] has quit [Quit: Leaving.]07:20
-!- K0stIa1 [] has joined #shogun07:20
-!- zxtx [] has joined #shogun07:44
-!- K0stIa1 [] has left #shogun []07:53
-!- K0stIa1 [] has joined #shogun07:55
-!- K0stIa1 [] has left #shogun []07:55
-!- gsomix [~gsomix@] has quit [Ping timeout: 264 seconds]08:06
-!- uricamic [~uricamic@2001:718:2:1634:6594:2095:459a:894f] has joined #shogun08:49
-!- uricamic [~uricamic@2001:718:2:1634:6594:2095:459a:894f] has left #shogun []08:50
-!- uricamic [~uricamic@2001:718:2:1634:6594:2095:459a:894f] has joined #shogun08:50
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has joined #shogun08:52
n4nd0sonney2k: around?09:26
-!- pluskid [~pluskid@] has joined #shogun09:28
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has left #shogun []09:49
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has joined #shogun09:58
-!- gsomix [~gsomix@] has joined #shogun10:00
K0stIaHi, all! I installed shogun for python and Gaussian, GMM are missing in my shogun.Distributions. My linux distribution is ArchLinux, I'm using python2.7.10:11
K0stIacan anyone help me with this issue..10:19
n4nd0K0stIa: hi10:21
n4nd0K0stIa: did you install from packages or from source?10:21
K0stIan4nd0: Hi! from source.10:22
K0stIaI took it from github10:23
pluskidK0stIa: did you install LaPack?10:23
n4nd0K0stIa: let me check if Gaussian and GMM have some dependencies that you might not be using10:23
pluskidK0stIa: AFAIK, lapack in Arch does not have header files, which are needed for shogun to compile some of its components10:24
pluskidK0stIa: you will have to either install lapack manually or you can use the package atlas-lapack from AUR10:24
K0stIapluskid: I'm using MKL.... and btw: still shogun didn't find it despite telling it explicitly through --libs, --includes.10:26
pluskidK0stIa: see above, did you install lapack?10:27
K0stIapluskid: no, I didn't.10:28
K0stIapluskid: MKL is not enough ?10:28
pluskidK0stIa: I'm sorry, what do you mean by MKL?10:28
pluskiddid ./configure report that it has detected a valid lapack?10:29
K0stIapluskid: Math Kernel Library10:29
K0stIapluskid: as I said ./configure didn't find it...10:29
K0stIapluskid: however I built numpy with MKL successfully10:30
pluskidK0stIa: then I'm not sure10:31
pluskidK0stIa: in the ./configure script, there's a section for checking Intel MKL10:31
pluskidK0stIa: you might have to modify that to suit your situation :-/ I'm not familiar with this10:31
pluskidsonney2k: maybe you can help with this10:32
n4nd0K0stIa: so MKL can be a substitute of lapack for shogun?10:33
n4nd0I didn't know about that ...10:33
pluskidn4nd0: there's a section in ./configure that checks many providers of lapack, atlas is one provider (that I'm using), MKL seems to be another10:35
K0stIan4nd0: as far as I know lapack is just an interface and MKL also has an implementation of it.. at least I built my numpy with it successfully and found it to be much faster than the lapack library taken from AUR10:35
pluskidbut not sure why it fails10:35
K0stIan4nd0: i.e. atlas10:35
n4nd0aham, I had no idea about that10:36
n4nd0pluskid, K0stIa: thanks for the info10:37
K0stIan4nd0: you are welcome10:37
pluskidmaybe blackburn can help with this when he shows up10:37
K0stIapluskid:  ok, I will wait for him... and I can try on different machine.. I have another lapack there...10:38
-!- gsomix [~gsomix@] has quit [Ping timeout: 250 seconds]10:46
-!- blackburn [~blackburn@] has joined #shogun11:01
blackburnK0stIa: hey still struggling with MKL?11:03
-!- yoo [2eda6d52@gateway/web/freenode/ip.] has joined #shogun11:05
K0stIablackburn:  sort of...11:05
yoohi all11:05
blackburnK0stIa: did you add --includes and --libs flags?11:06
K0stIablackburn: yes I did11:06
blackburnstill not detected?11:06
K0stIablackburn: yes11:06
blackburnokay when what did you add in there?11:07
K0stIablackburn: something like ./configure --interfaces=python_modular \11:09
K0stIa    --prefix=/usr \11:09
K0stIa    --libs=/opt/intel/mkl/lib/intel64 \11:09
K0stIa    --includes=/opt/intel/mkl/include11:09
koen-shogunyou should check configure.log , on what happened in the MKL checks11:09
blackburnkoen-shogun: did you ever link shogun on mkl?11:10
koen-shogunit probably fails to compile (or to link) there11:10
koen-shogunno, but I had to fix the same issues for ATLAS on my machine11:10
K0stIaMKL is missing in my log...11:10
koen-shogungo to the "============ Checking for Intel MKL support ============"  in configure.log11:10
koen-shogunand copy the text underneath that :)11:11
koen-shogununtil the part where it says "Result is: no"11:11
K0stIakoen-shogun: ok11:11
koen-shogunoh, and which version of MKL do you have?11:12
K0stIa============ Checking for Intel MKL support ============11:14
K0stIa#include <mkl_cblas.h>11:14
K0stIavoid dpotrf(char* uplo, int* n, double* a, int* lda, int* info);11:14
K0stIaint main(int argc, char** argv)11:14
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has quit [Excess Flood]11:14
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has joined #shogun11:14
pluskidK0stIa: use some external service to paste large chunk of code :)11:15
K0stIapluskid: for instance ? :)11:15
pluskidlike gist from github11:15
K0stIakoen-shogun: pluskid:
K0stIakoen-shogun: about version of mkl it's 2011_sp1.9.29311:18
koen-shogunit finds your include files ok11:19
koen-shogunbut it fails on linking11:19
K0stIakoen-shogun: you mean I have to change --libs parameter?11:26
-!- n4nd0 [] has quit [Quit: leaving]11:28
blackburnK0stIa: I actually expect something is wrong with -l*11:28
K0stIablackburn: then it's not my problem, right ?11:30
koen-shogunwell, but you can fix it11:30
koen-shogunthe Intel MKL changes the libraries to link against every other version11:30
blackburnK0stIa: can you paste somewhere contents of /opt/intel/mkl/lib/intel64 please?11:30
K0stIablackburn: ok11:30
koen-shogunfor example, I have a link line (for some other program) that looks like -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core11:31
koen-shogunoh, and the complete needed link line then needs "-Wl,--start-group -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group"11:32
blackburnkoen-shogun: I expect the thing we have for mkl now is outdated11:32
koen-shogunif you edit configure, go to "echocheck "Intel MKL support""11:33
koen-shogunand replace  -lmkl -lguide -lmkl_lapack twice with for example my link line, that might work11:33
K0stIakoen-shogun: thanks, I will try...11:34
blackburnK0stIa: lines 3168,3174,3176,318211:34
blackburntry to replace it with11:34
blackburn-Wl,--start-group -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group11:34
blackburnand run configure11:34
K0stIablackburn: still the same :(11:37
koen-shogunwhat does configure.log say now?11:39
blackburnthat's openmp missing I guess11:42
blackburnK0stIa: either replace intel_thread with sequential11:43
blackburnor add -fopenmp11:43
koen-shogundo you also use the intel compiler?11:44
koen-shogunbecause then it's -openmp11:44
K0stIakoen-shogun: no, gcc11:44
koen-shogunok, and do you have a iomp5 library somewhere? then you'd need -liomp5 (plus an additional --libs path)11:45
koen-shogunmaybe sequential is the easiest solution11:45
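Pulling the suggestions in this exchange together, the full invocation being tried would look roughly like the sketch below. This is illustrative only: MKL library names and directory layouts differ between MKL versions, so the right -l flags depend on what is actually present in /opt/intel/mkl/lib/intel64.

```shell
# Paths follow the MKL 2011 layout mentioned above; adjust as needed.
./configure --interfaces=python_modular \
    --prefix=/usr \
    --libs=/opt/intel/mkl/lib/intel64 \
    --includes=/opt/intel/mkl/include

# ...after editing the lapack/MKL check in ./configure to link via either
#   -Wl,--start-group -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group -fopenmp
# (gcc with threaded MKL), or the sequential variant, which avoids the
# OpenMP runtime dependency entirely:
#   -Wl,--start-group -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -Wl,--end-group
```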
K0stIaok, I have to go now... I will come back in 30-40 minutes, then I will inform you about my situation. koen-shogun, blackburn thanks for helping11:47
blackburnK0stIa: ok I will  be around11:48
CIA-18shogun: Sergey Lisitsyn master * r2fd15a1 / src/configure : Updated MKL linking flags -
-!- needsch [] has joined #shogun12:10
-!- emrecelikten [~emre@] has joined #shogun12:32
K0stIablackburn: I see someone updated ./configure, so I updated shogun from github, and tried to install it, it failed on make command. this is what it says:
blackburnK0stIa: well at least it detected mkl :)13:08
K0stIayes, that's true :)13:08
blackburnK0stIa: can you paste /opt/intel/mkl/include/mkl_cblas.h somewhere?13:09
K0stIablackburn: yes13:10
blackburnK0stIa: please update and try again13:14
CIA-18shogun: Sergey Lisitsyn master * rda74d42 / src/shogun/mathematics/lapack.h : Removed wrong enum keywords in lapack wrappers -
K0stIablackburn: ok13:15
-!- n4nd0 [] has joined #shogun13:23
blackburnK0stIa: so did that help13:29
blackburnokay got it13:30
CIA-18shogun: Sergey Lisitsyn master * r77433fe / src/shogun/mathematics/lapack.cpp : Removed one more wrong enum in lapack -
blackburnK0stIa: hopefully should work now13:31
K0stIablackburn: ok, I'll check13:31
-!- pluskid [~pluskid@] has quit [Ping timeout: 272 seconds]13:52
-!- pluskid [~pluskid@] has joined #shogun14:14
-!- yoo [2eda6d52@gateway/web/freenode/ip.] has quit [Quit: Page closed]14:21
n4nd0yaaay!!! \o/14:27
-!- yoo [2eda6d52@gateway/web/freenode/ip.] has joined #shogun14:28
n4nd0I think I can say with confidence that my hm-svm implementation is correct!14:28
yoon4nd0: where can we find the implementation of hm-svm with shogun ?14:34
n4nd0yoo: it is in my branch right now14:35
n4nd0yoo: I have to polished some things - lot of debugging code currently14:35
n4nd0yoo: do you want to use it?14:35
yoon4nd0: it seems interesting :)14:35
n4nd0yoo: I will probably prepare a pull request today14:36
yoon4nd0: I wanted to code one myself, then seeing yours would be cool. I am still not good enough with shogun anyway ..14:36
yoon4nd0: nice:14:37
n4nd0yoo: so have you experience with hm-svm?14:39
yooyea but code is creepy .. looks like hybrid svm-hmm , I thought to rewrite something in the near future14:40
yoobut since I have discovered your work in shogun, I would like to better understand the codes and mb contribute then.14:42
yooI still have some problems understanding some parts of the code architecture14:43
n4nd0yoo: do you know about applications of it?14:46
n4nd0yoo: I am looking for something appealing to present in my thesis14:46
yooI will take a look if you want .. no explosive idea in mind right now.14:50
yoolet me know if PR today ! +14:51
yoowhat is the subject of your thesis btw ?14:52
n4nd0yoo: completely open right now14:57
n4nd0nothing decided14:57
n4nd0I just like the idea of using the work I have been doing during this summer since I feel that I've learnt interesting things14:58
n4nd0+ the ones I am still to learn :)14:58
emreceliktenn4nd0: Congratulations15:02
n4nd0emrecelikten: thanks!15:06
CIA-18shogun: Sergey Lisitsyn master * r1cee68b / (13 files in 3 dirs): Multitask crossvalidation support -
-!- K0stIa [~kostia@2001:718:2:1634:76f0:6dff:fe92:4013] has left #shogun []15:29
-!- pluskid [~pluskid@] has quit [Ping timeout: 248 seconds]15:58
-!- pluskid [~pluskid@] has joined #shogun15:59
-!- gsomix [~gsomix@] has joined #shogun16:01
-!- pluskid [~pluskid@] has quit [Quit: Leaving]16:37
-!- heiko1 [] has joined #shogun16:38
* sonney2k ohh man it feels so good - true broadband internet :D16:40
@sonney2kgsomix, how are things?16:41
heiko1sonney2k, so you are finally back? :)16:41
@sonney2kheiko1, yes I am16:43
heiko1hope you had a good time16:43
-!- ckwidmer [8ca3fe9d@gateway/web/freenode/ip.] has joined #shogun16:44
gsomixsonney2k, still working on buffers for SG.16:44
gsomixI have some problems with it. Pointers crashes.16:45
@sonney2kgsomix, lets talk tonight or tomorrow16:45
gsomixok, tonight16:46
gsomixI can swim in the sea before. :D16:46
@sonney2kheiko1, yes indeed :)16:46
gsomixsonney2k, did you receive my letters?16:47
-!- n4nd0 [] has quit [Quit: leaving]16:49
-!- gsomix [~gsomix@] has quit [Ping timeout: 244 seconds]17:24
blackburnsonney2k: yes, did you receive his love letters17:46
CIA-18shogun: Heiko Strathmann master * re532b28 / (5 files in 2 dirs): Merge pull request #680 from karlnapf/master (+6 more commits...) -
-!- uricamic [~uricamic@2001:718:2:1634:6594:2095:459a:894f] has quit [Quit: Leaving.]17:56
-!- tiger_eye [] has left #shogun ["Leaving"]18:05
-!- puffin444 [62e3926e@gateway/web/freenode/ip.] has joined #shogun18:07
CIA-18shogun: Heiko Strathmann master * rda0380e / src/shogun/statistics/LinearTimeMMD.cpp : added threshold computation using inverse gaussian cdf -
CIA-18shogun: Heiko Strathmann master * rb762388 / (4 files in 3 dirs): Merge pull request #681 from karlnapf/master -
CIA-18shogun: Heiko Strathmann master * rf3492a1 / (3 files in 2 dirs): added inverse gaussian cdf plus tests -
-!- yoo [2eda6d52@gateway/web/freenode/ip.] has quit [Quit: Page closed]18:37
-!- blackburn [~blackburn@] has quit [Read error: Connection reset by peer]19:10
-!- puffin444_ [62e3926e@gateway/web/freenode/ip.] has joined #shogun19:12
-!- puffin444 [62e3926e@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]19:13
-!- blackburn [~blackburn@] has joined #shogun19:24
-!- heiko1 [] has left #shogun []19:31
-!- n4nd0 [] has joined #shogun20:28
n4nd0hi there20:28
n4nd0sonney2k: around?20:29
n4nd0blackburn: do you know if sonney2k is/will be around here this evening?20:29
blackburnn4nd0: I only know that he was online tomorrow :)20:30
n4nd0blackburn: ehh ok20:30
n4nd0like he will be tomorrow right?20:30
blackburnn4nd0: that's crazy but this happens with me for the third time20:32
blackburntomorrow = today20:32
n4nd0haha ok :)20:33
blackburnno idea what makes me mix these words20:33
n4nd0don't worry, these things happen20:33
n4nd0sometimes I talk to myself20:33
n4nd0n4nd0: like this20:33
-!- n4nd0 [] has quit [Ping timeout: 246 seconds]20:53
-!- gsomix [~gsomix@] has joined #shogun21:39
-!- gsomix [~gsomix@] has quit [Quit: Ex-Chat]21:50
-!- gsomix [~gsomix@] has joined #shogun21:51
-!- n4nd0 [] has joined #shogun21:52
gsomixgood evening21:52
-!- n4nd0 [] has quit [Ping timeout: 255 seconds]21:59
@sonney2kgsomix, I am around just now...22:02
@sonney2kgsomix, so what is the trouble with SG* ?22:02
@sonney2kthat these are not pointers?22:03
@sonney2kgsomix, and yes I received your emails so I didn't have to worry :)22:04
@sonney2kgsomix, I hope that you can fix this up within the next few days such that we can work on the model selection typemaps...22:08
-!- n4nd0 [] has joined #shogun22:11
blackburnn4nd0: sonney2k22:11
blackburnhave a good date22:12
@sonney2kblackburn, what?22:12
@sonney2kn4nd0, ???22:12
blackburnsonney2k: n4nd0 missed you22:13
@sonney2kI will go to bed soon but just now I am around :)22:13
n4nd0sonney2k: yes, I was writing my weekly report right now making you that question22:13
blackburnlike the desert misses the rain22:13
n4nd0I will ask now instead22:13
@sonney2kblackburn, :P22:13
n4nd0sonney2k: so now according to the timetable I should work on CRFs22:13
n4nd0sonney2k: but Georg cannot help me with that22:14
n4nd0Nico is still on holidays22:14
@sonney2kn4nd0, does Georg have some other ideas?22:14
n4nd0and we didn't talk about how they should fit into our SO framework22:14
blackburnn4nd0: may be you just hang around with guys, have some beer?22:14
@sonney2kI would expect CRFs to not be that different22:14
n4nd0sonney2k: yes, there are some things in the hmsvm toolbox that are not currently in my implementation22:15
n4nd0like the plifs22:15
@sonney2kn4nd0, and then when nico comes back continue to do CRFs ?22:15
@sonney2kn4nd0, plifs are some important thing - and they are in shogun already22:15
n4nd0sonney2k: I am afraid there won't be enough time to finish it before gsoc finishes22:15
blackburnmind to continue? ;)22:15
@sonney2kso certainly a nice replacement to do instead of CRFs for now22:15
n4nd0sonney2k: I know how plifs work in the hmsvm toolbox, and I think it won't take much time to add them22:16
@sonney2kn4nd0, well we all hope you stick around after gsoc ...22:16
n4nd0blackburn, sonney2k: sure I can work on that later22:16
n4nd0no problem for me22:16
blackburnn4nd0: life is pain22:16
n4nd0but for Nico?22:16
blackburngsoc is hell22:16
@sonney2kn4nd0, look at the CPlif classes22:16
@sonney2kthey should do the plif business22:16
blackburnyou are all tied to shogun support for ever22:16
n4nd0sonney2k: ok, I will prepare the pull request and later do that22:16
n4nd0blackburn: :)22:17
@sonney2kblackburn, one shogun to rule them all or so :D22:17
@sonney2kn4nd0, I would even say that adding plifs is more important than CRFs22:17
n4nd0sonney2k: ok22:17
blackburnhow to use plifs?22:17
n4nd0sonney2k: but I mean, if I do it as they are done in the hmsvm toolbox22:17
blackburnit took 1.5 years for me to get what does it mean22:18
n4nd0sonney2k: it is not much code22:18
@sonney2kso if you can do the hmsvm toolbox implementation in shogun too - getting same result as hmsvm toolbox that would be even better22:18
n4nd0sonney2k: right now I get the exact same results for integer data :)22:18
@sonney2kblackburn, for what?22:18
n4nd0I am very happy with that22:18
@sonney2kn4nd0, thats a start :D22:18
blackburnsonney2k: how to use plifs for anything?22:18
* sonney2k too22:18
@sonney2kblackburn, ahh well consider that a linear function is not enough22:19
n4nd0sonney2k: believe me, it's more than that. At least in the hmsvm toolbox the code that is for the plifs changes very little compared to the other one22:19
@sonney2klike svm22:19
@sonney2kso you use a piece wise linear function22:19
n4nd0with the other one I mean just working with discrete observations22:19
blackburnsonney2k: but is plif a model or an output?22:19
n4nd0blackburn: it is part of the model22:19
@sonney2kblackburn, not a model - just some transformation22:20
@sonney2kwell or part of the model :D22:20
blackburnso is it just to support some broken hyperplanes?22:20
@sonney2kfor svms you can emulate that by using these binneddotfeatures22:20
n4nd0sonney2k: and what about the coffin idea in the SO framework?22:21
blackburnsonney2k: I never thought of that mean of binned dot22:21
@sonney2kn4nd0, I thought you are doing it already - no?22:21
n4nd0sonney2k: mmm no22:21
@sonney2kblackburn, binned dot is making the svm learn a piecewise linearfunction22:22
@sonney2kwith fixed 'stuetzstellen;22:22
n4nd0Nico told me I should talk to you about that once I had nothing else to do :D22:22
blackburnsonney2k: I got it already22:22
@sonney2kn4nd0, hmmhh22:22
blackburnno f idea what is stuetzstellen22:22
blackburnsounds like luftwaffe22:22
blackburnor panzerkrieg22:22
@sonney2kblackburn, supporting points or so22:22
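The piecewise linear transform ("plif") being described can be sketched in a few lines of plain Python. This is illustrative only — the names here are not Shogun's CPlif API. The fixed support points are the "stuetzstellen"; the values at those supports are what gets learned.

```python
# Hypothetical PLiF sketch; `supports` and `values` are illustrative names.
def plif(x, supports, values):
    """Linearly interpolate x against fixed, sorted support points."""
    if x <= supports[0]:
        return values[0]
    if x >= supports[-1]:
        return values[-1]
    for i in range(len(supports) - 1):
        lo, hi = supports[i], supports[i + 1]
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            return (1 - t) * values[i] + t * values[i + 1]

# The learned parameters are the values at the supports; the supports
# themselves stay fixed.
print(plif(1.5, [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]))  # midpoint of 1.0 and 4.0 -> 2.5
```

Because the supports are fixed, a linear learner over the interpolation weights effectively learns an arbitrary piecewise linear function of the input — which is the same trick the binned dot features emulate for SVMs.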
@sonney2kn4nd0, I was actually hoping that you can do that too...22:23
@sonney2kok gtg for now22:23
@sonney2kwill hopefully be around tomorrow too.22:23
n4nd0sonney2k: let's talk about it tomorrow then?22:23
blackburnn4nd0: can I help you with coffinization?22:23
blackburnI've got some experience with it22:23
n4nd0blackburn: nice, that would be great22:24
blackburnput your code into the coffin ya know22:24
n4nd0blackburn, sonney2k: I will focus on that after the PLiFs, ok?22:24
blackburnfocus on what?22:24
blackburnn4nd0: guide me to some solver code22:25
n4nd0on the coffinization22:25
n4nd0blackburn: what do you mean?22:25
n4nd0you want to see the code of my solver I understand :)22:25
blackburnn4nd0: well I mean I could take a look to the code22:25
n4nd0I am sorry for the debug traces ... they are going out soon22:26
blackburnn4nd0: I will never forgive you22:27
n4nd0I guess that the parts you are interested in are the ones related to the use of features22:27
gsomixsonney2k, hey22:27
n4nd0argmax function and compute_loss_arg22:27
gsomixoh, ok :)22:28
blackburnn4nd0: well compute_loss_arg is not really22:30
n4nd0blackburn: tell me why22:30
n4nd0we should be using an special kind of features I am afraid22:30
blackburnn4nd0: I can't see where do you use features there..22:30
n4nd0something that has access both to the labels and features22:30
blackburnare we talking about the same?22:30
n4nd0the problem is that in SO learning22:30
n4nd0the vector that goes in the product with w looks like this22:31
n4nd0x is a feature vector22:31
n4nd0y is a label22:31
blackburnso where do we have explicit access to data?22:32
n4nd0so to make it coffinizable I think it is required to use a special type of features for that22:32
n4nd0in the models22:32
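For context, the joint feature map psi(x, y) being discussed can be sketched for the simplest (multiclass-style) structured case. This is a hypothetical illustration, not Shogun's structured-output API: psi places x into the block belonging to label y, so that the dot product of w with psi(x, y) scores label y, and prediction is the argmax over y.

```python
# Illustrative joint feature map for a multiclass-style structured problem.
def psi(x, y, num_labels):
    """Place feature vector x into the block of label y; zeros elsewhere."""
    out = [0.0] * (len(x) * num_labels)
    out[y * len(x):(y + 1) * len(x)] = x
    return out

def score(w, x, y, num_labels):
    """<w, psi(x, y)> -- the compatibility of label y with input x."""
    return sum(wi * pi for wi, pi in zip(w, psi(x, y, num_labels)))

x = [1.0, 2.0]
w = [0.5, 0.5, 2.0, -1.0]  # two labels, two features per block
print(score(w, x, 0, 2))  # 0.5*1 + 0.5*2 = 1.5
print(score(w, x, 1, 2))  # 2.0*1 - 1.0*2 = 0.0
```

The coffin question above is exactly about this product: if psi(x, y) is only ever consumed through dot and add operations, it never needs to be materialized explicitly.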
blackburnpretty complex thing huh22:36
blackburnn4nd0: from what I see now it is not really needed to make it use add and dot22:36
n4nd0what arguments for these add and dot?22:37
blackburnn4nd0: basically most solvers work using two operations22:38
blackburnadd some train feature vector to some given vector22:38
blackburnand compute dot product of some train feature vector and given vector22:39
n4nd0that is done in compute_loss_arg22:39
n4nd0here the train feature vector are psi_pred and psi_truth22:40
n4nd0just the dot product actually22:40
blackburnthat is not the point of coffin22:40
blackburnmain point is that you don't need to have vectors itself explicitly22:41
n4nd0yes, I understood that when I read the paper22:41
n4nd0I didn't get though why it is so beneficial22:42
n4nd0and how to do it for SO22:42
blackburnn4nd0: first is easy question22:42
blackburnsecond is not :)22:42
blackburnn4nd0: okay imagine you have a set of images22:42
blackburnsay 1M of images22:42
blackburnit is a common way to extend dataset using some transformations22:43
blackburnrotations may be22:43
blackburnso if you don't need to have all images explicitly22:44
blackburnand if you have some procedure to compute dot on transformed images as efficiently22:44
blackburnyou easily extend your train set but your memory requirements do not go up22:45
blackburnor other example - you have fast preprocessing routine (homogeneous kernel map is very fast for example)22:45
blackburnby default it makes d = 3*d22:45
blackburnso you need to have 3x more memory22:45
blackburnn4nd0: see what I mean?22:46
n4nd0but it is something that takes more time than having everything stored in memory, right? I understand that it is beneficial in any case because for large datasets everything simply won't fit in memory22:46
n4nd0blackburn: yes, but ^22:46
n4nd0just to be sure22:46
blackburnn4nd0: yes that can require more time22:47
n4nd0for the rotations example22:47
blackburnnot the best example probably22:47
n4nd0the same rotation will be done more than once for example22:47
blackburnwhat if we just cut22:47
blackburnvarious sliding window positions22:47
blackburnyou need to have only one image and it can represent 100 vectors22:48
blackburnwith the same memory req22:48
blackburnyou just compute dot with different initial position or so22:48
n4nd0I didn't get this example22:49
blackburn0 0 022:49
blackburn0 0 022:49
blackburn0 0 022:49
blackburn^ image22:49
blackburn1 1 022:49
blackburn1 1 022:49
blackburn0 0 022:49
blackburnfirst window position22:49
blackburn0 1 122:49
blackburn0 1 122:49
blackburn0 0 022:49
blackburnso on22:50
blackburnn4nd0: it is not that costly to compute dot or add22:50
blackburnwith just a region of image available22:50
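The sliding-window example above can be made concrete with a short sketch (plain Python, illustrative names): the image is stored once, and each window position acts as a virtual feature vector whose dot product with w is computed on the fly, which is the core of the COFFIN idea being discussed.

```python
# One stored image stands in for many "virtual" feature vectors
# (sliding-window positions); no window is ever copied out.
def window_dot(image, w, top, left, height, width):
    """Dot product of weight vector w with a height x width window of image."""
    total = 0.0
    for i in range(height):
        for j in range(width):
            total += image[top + i][left + j] * w[i * width + j]
    return total

image = [[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]]
w = [1.0, 1.0, 1.0, 1.0]  # weights for a 2x2 window

# Each (top, left) offset is a distinct training vector, but memory
# requirements stay those of the single image.
print(window_dot(image, w, 0, 0, 2, 2))  # 0+1+3+4 = 8.0
print(window_dot(image, w, 1, 1, 2, 2))  # 4+5+7+8 = 24.0
```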
blackburnn4nd0: sparse uses it too22:53
n4nd0I don't think I understand the real magic of coffin22:53
-!- gsomix [~gsomix@] has quit [Ping timeout: 264 seconds]22:53
blackburnn4nd0: hmmm I am in search of good example :D22:56
n4nd0maybe I am just looking for an easter egg22:57
n4nd0sonney2k told me that COFFIN was quite relevant22:57
blackburnn4nd0: well it is actually a good soft engineering example as well22:58
blackburnyou don't have to dispatch what type of features you are working on22:58
blackburnyou just make features do that for you22:58
blackburnso you don't care about storage/preprocessing/anytihng22:58
blackburnif you work with sparse vectors22:59
blackburnyou can't get pointer to dense vector to add with22:59
blackburnyou have to handle it somehow everywhere22:59
blackburnwith coffin you don't care22:59
CIA-18shogun: Sergey Lisitsyn master * r7893db8 / (3 files): Removed multitask subset support -
-!- ckwidmer [8ca3fe9d@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]23:09
-!- zxtx [] has quit [Ping timeout: 244 seconds]23:10
-!- zxtx [] has joined #shogun23:12
--- Log closed Tue Jul 31 00:00:17 2012