--- Log opened Sun Apr 15 00:00:19 2012 -!- karlnapf [~heiko@host86-181-156-211.range86-181.btcentralplus.com] has quit [Quit: Leaving.] 01:55 -!- blackburn [~qdrgsm@109.226.90.219] has left #shogun [] 02:09 -!- harshit_ [~harshit@182.68.181.141] has quit [Remote host closed the connection] 02:10 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] 02:48 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] 05:34 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun 09:11 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host] 09:11 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 09:11 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] 09:17 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun 09:43 -!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host] 09:43 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 09:43 -!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] 09:43 -!- wiking [~wiking@vpna079.ugent.be] has joined #shogun 09:49 -!- wiking [~wiking@vpna079.ugent.be] has quit [Changing host] 09:49 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 09:49 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] 10:10 -!- blackburn [~qdrgsm@109.226.90.219] has joined #shogun 10:20 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun 10:28 good morning! 10:29 hey 10:35 easter today here finally :D 10:35 good :) 10:37 is it holidays there too? 10:38 -!- naywhayare [~ryan@spoon.lugatgt.org] has quit [Ping timeout: 272 seconds] 10:38 -!- naywhayare [~ryan@spoon.lugatgt.org] has joined #shogun 10:38 n4nd0: yes kind of 10:41 blackburn: bye bye to m_q then? 10:48 n4nd0: yeah I think so 10:48 blackburn: should we wait and ask sonney2k? 10:49 I don't think so 10:49 n4nd0: can you imagine it is useful?
10:49 blackburn: well it is just an heuristic to give more importance to the class of the points that are closer 10:50 I think it actually makes sense 10:50 it would give ~ result as reducing k 10:51 but to tell the truth I don't know if it makes a real difference in practice 10:51 yes exactly 10:51 no need for that 10:51 all right 10:51 will drop it then ;) 10:51 please :) 10:51 hahahahah putin came with moscow's mayor instead of wife lol 10:54 http://www.youtube.com/watch?feature=player_detailpage&v=OGqCvTHrXmo 10:54 n4nd0: check this shameful cults full of gold and opulence 10:54 yeah, I am watching it 10:55 I don't understand that much though :P 10:55 n4nd0: just imagine how much money they spent on this golden shit 10:56 n4nd0: and there is funny story that putin dumped his wife - and we were laughing he bring mayor instead of his wife :D 10:57 it is ok to be gay lol 10:58 haha 10:58 n4nd0: btw legislatives here are going to introduce law that punishes gays because of gay advocacy 10:59 how wild is it for you? 10:59 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 11:03 n4nd0: http://www.parashift.com/c++-faq-lite/containers.html#faq-34.1 we should adv it to soeren 11:04 blackburn: yeah, we should do that! 11:04 n4nd0: btw do you like libs extraction? 11:04 blackburn: what is it? 11:05 https://github.com/shogun-toolbox/shogun/pull/454 11:05 n4nd0: this one ^ 11:05 blackburn: aham, so to move code from another libraries to an specific directory? 11:06 yes 11:06 blackburn: yes, I like it - I think the project is better ordered like that 11:07 blackburn: is there any con? 
11:07 I don't see any 11:07 I agree then 11:08 I should probably move the cover tree there 11:10 n4nd0: to avoid some merge issues better stay it there 11:12 and we will move it later 11:12 ok 11:12 -!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] 11:18 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 11:18 blackburn: I like the article about containers 11:19 yeah this faq is nice 11:20 there are other faqs covering whole C++ 11:20 blackburn: I'd actually like to see more C++ style in the project 11:21 any other suggestions? 11:22 I like the STL too 11:24 algorithms? 11:25 and as I think I told you once and pluskid also suggested, smart pointers could be a good idea 11:25 smart pointers - yes, agree 11:25 blackburn: for algorithms and for the containers 11:25 ohhh 11:26 blackburn: to use iterators 11:26 I would like to see a good example of usage 11:26 like how to use some of the functions there? 11:27 blackburn:  :)))) 11:27 sonney2k: does not like STL afaik 11:28 i've told 2 months ago to have iterators on various types like sgvector or sgmatrix would be great to see 11:28 n4nd0: yes I can't see because I do not think in the way :D 11:28 wiking: we have iterators on dot features 11:28 we should buy a present to sonney2k 11:29 http://www.amazon.com/Effective-STL-Specific-Standard-Template/dp/0201749629 11:29 ahahhahaha 11:29 send one  for me please too 11:30 haha 11:30 lol man i've completely overseen this function set_combined_kernel_weight 11:30 okay nevermind I downloaded a torrent lol 11:30 \o/ 11:31 blackburn: be careful man, logged chat :P 11:31 n4nd0: recall the country I am from 11:31 HEY THERE I AM DOWNLOADING TORRENTS 11:31 in that book they recommend and talk a bit about for_each 11:32 hehehehe 11:32 is cool 11:32 but then again 11:32 it really depends 11:32 I have never used myself it but it looks nice 11:32 sometimes the whole STL thing is a bit like: "shooting on a bird with a cannon" 11:32 so sometimes 
you are just better off with valarray 11:35 it's much faster than any other STL container type 11:35 so for example in case of SGVector having a valarray would be good 11:38 with valarray you mean a normal T*? 11:38 I'm sorry I didn't understand properly :S 11:38 and what about using the array class in C++11 for that? 11:41 mmm valarray is by far the fastest 11:41 and simply have a vector like 11:41 valarray<T> vec 11:41 and still you can have some funky operations on them 11:42 with the help of the apply function 11:42 n4nd0: and the problem with array is that it's C++11 11:46 so you'll have some problems with code portability 11:46 while valarray is standard C++ 11:46 I had no idea about this valarray 11:47 I think it is cool 11:47 anyhow valarray is just a good container when you have SIMD operations on your machine... so like any normal cpu nowadays 11:47 mmx, sse, sse2 11:47 any gcc will be able to optimize your code for using SIMD 11:48 with valarray 11:48 but valarray is basically good for built-in types... like float, int etc 11:50 it's not a good container for objects or pointers to objects 11:50 basically it's not meant for that :)) 11:50 I see 11:50 for that u use some standard STL container 11:50 maaaan amazing :) 12:21 done it :) 12:22 \o/ 12:22 blackburn: what ya say about the valarray? 12:22 wiking: I heard it is something like bringing fortran speed to C++? 12:22 ahahahahha 12:22 you must have read stackoverflow 12:23 :> 12:23 yeap :D 12:23 wiking: I need to override a protected method of the base of a base class in a derived class. how do you think it should be done? 12:28 so you want to override the function a in class c where c : b, b : a 12:29 ? 12:29 a sorry this was stupid 12:29 so you want to override the function x in class c where c : b, b : a and a.x ? 12:30 so that x is defined in a... 12:30 why would virtual not work?
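A minimal sketch of the `std::valarray` element-wise operations and `apply()` that wiking describes above; the helper names (`sum_of_squares`, `halved`) are made up for illustration and are not shogun code.

```cpp
#include <valarray>

// Element-wise multiply then sum: expressions like v * v are the kind of
// code the compiler can vectorize (SIMD), which is the speed argument above.
double sum_of_squares(const std::valarray<double>& v)
{
    std::valarray<double> sq = v * v; // element-wise product, no explicit loop
    return sq.sum();
}

// apply() returns a new valarray with a function applied to every element,
// one of the "funky operations" mentioned in the chat.
double half(double x) { return x / 2.0; }

std::valarray<double> halved(const std::valarray<double>& v)
{
    return v.apply(half);
}
```

Whether this beats a hand-rolled loop depends on the standard library: the standard permits expression-template implementations for valarray but does not require them.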
12:30 wiking: exactly 12:30 wiking: it would work but it is not visible from c 12:30 ah ok 12:30 I guess I need to declare it in b? 12:31 well in case of c a class is simply a struct no? 12:31 wiking: damn that would be easy if it was public but I don't want to expose these methods :D 12:32 mmm wait a second 12:33 however I need to redeclare it again in C 12:33 how do you want to access that method at all in c? 12:33 wiking: isn't it possible? 12:33 well afaik no 12:33 i mean it's not going to create you function pointers or something 12:33 and simple c does not understand the syntax a.function () 12:34 unless there's a function pointer there in struct a 12:34 wiking: but it will work in case of public virtual 12:34 mmm have u tried? 12:34 i mean i'll try it for you now :) 12:35 wiking: why not? I guess I did not describe it right 12:35 I need to redefine method of A in C 12:35 yeah i got that 12:35 i'm just curious how will it work at all in ansi c 12:36 the only problem is visibility 12:36 while you are having classes 12:36 so for example my compiler dies on this :) 12:37 http://snipt.org/uhfL6#expand 12:38 if u compile it with gcc 12:38 as if you want to have c 12:38 what's this? 12:38 well a very simple example trying to use a class in ansi c 12:38 that this way it won't work 12:39 so i'm wondering how do you want to call a class and it's method in ansi c 12:39 why ansi c? 12:39 aah fuck 12:39 :DDD 12:39 blackburn: wiking: it would work but it is not visible from c 12:39 :D 12:39 damn :D 12:39 wiking: no the problem is 12:39 i've got it 12:39 what's the problem now 12:39 why wouldn't it inherit 12:40 I have protected virtual method in A 12:40 yes i got that now 12:40 will it be available from C? 12:40 i wonder why isn't it getting as well down to C 12:40 hmm let me check again 12:40 class friend is not transitive i know 12:40 but inheritance of a fund... 12:40 function.. 
12:40 works for me 12:43 blackburn http://snipt.org/uhfM8#expand 12:44 hmm fine thanks 12:44 what was the problem? 12:44 wiking: currently I have no idea if there was a problem :D 12:45 ah ok 12:45 anyhow this what you've told should work 12:45 even if it's not redefined in class B 12:45 wiking: yes it seems so :) 12:46 wiking: btw did you postponed latent stuff a little? 12:48 blackburn: no why? 12:49 just curious :) 12:49 blackburn: working on it... 12:49 just had a chat with alex the other day 12:49 we are finally on the same page 12:50 as obviously from our emails we had a misunderstanding 12:50 I see 12:50 now we 'realized' that yes actually we need the LatentFeatures 12:50 class 12:50 hah 12:50 wiking: btw where did you place your classes? 12:50 otherwise we couldn't optimize the latent part 12:50 blackburn: well now they are still there where i've committed them first 12:51 I suggest latent both for latentfeatures and other latent stuff 12:51 shogun/latent/ 12:51 I mean 12:51 aha 12:51 so you want to move everything under shogun/latent 12:51 at one place? 12:51 even LatentLabel and LatentFeature ? 12:51 wiking: why not? 12:52 i'm just asking 12:52 I think it is a domain for that :) 12:52 it's a bit confusion 12:52 do you think so? 12:52 i mean even to have the features and the labels 12:52 i got the point for having the machine separated 12:53 n4nd0: what do you think about that ^? 12:53 blackburn: I have to go back in the conversation 12:53 my opinion is that base stuff should be as is but domains are separated 12:53 although maybe it'  should be something like 12:53 shogun/classifier/latent 12:53 just as in case of mkl 12:53 n4nd0: do you think all the latent stuff should go shogun/latent? 12:53 or? 12:53 blackburn: I am correcting a Spanish easy for my gf and I lost part of the conversation 12:54 n4nd0: :DDD 12:54 :D 12:54 n4nd0: no need to dig into whole conversation :) 12:54 wiking: can't it be multiclass?> 12:54 well it should be 12:54 regression? 
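The inheritance question settled above can be sketched like this (illustrative class names, not shogun's): a protected virtual declared in A is still overridable in C even though B never redeclares it, and calls through the A interface dispatch to the override.

```cpp
// helper() is a protected virtual declared in the base of the base.
// B never mentions it, yet C overrides it, and a call through A*
// reaches C::helper(). Names are illustrative only.
class A
{
public:
    virtual ~A() {}
    int call() { return helper(); }    // public entry to the protected virtual
protected:
    virtual int helper() { return 1; }
};

class B : public A {};                 // no redeclaration of helper()

class C : public B
{
protected:
    virtual int helper() { return 3; } // overrides A::helper through B
};
```

If a compiler complains here, it is usually about access level or name hiding, not inheritance: the virtual stays overridable all the way down the hierarchy.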
12:55 for me otherwise it's not much of a use 12:55 or even structured output? :) 12:55 yep 12:55 yep 12:55 structured output 12:55 if so I think it should go to shogun/latent 12:55 because it won't be 'classifier' in standart mean 12:55 however let us discuss it later with soeren 12:56 ok 12:56 let me know 12:56 i'll continue as i've started earlier 12:56 yes it is not hard change so 12:56 no need to worry :) 12:56 yeah just a git mv 12:56 -!- wiking_ [~wiking@78-23-189-112.access.telenet.be] has joined #shogun 12:58 -!- wiking_ [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host] 12:58 -!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun 12:58 -!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 265 seconds] 13:01 -!- wiking_ is now known as wiking 13:01 -!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] 13:03 -!- wiking [~wiking@vpna032.ugent.be] has joined #shogun 13:04 -!- wiking [~wiking@vpna032.ugent.be] has quit [Changing host] 13:04 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun 13:04 blackburn: why LatentFeatures wouldn't be ok in shogun/features? 13:07 -!- pluskid [~pluskid@1.204.107.190] has joined #shogun 13:07 n4nd0: it would but it makes sense to store domain-specific things in separate folder (for me) 13:08 blackburn: yes, it actually makes sense too 13:09 blackburn: I guess that in this case, LatentFeatures, it is a matter of taste 13:09 blackburn: it probably makes sense both under latent/ and under shogun/features 13:10 I moved MulticlassSVM.{h,cpp} to multiclass folder, but why I always get: *** No rule to make target ../../shogun/classifier/svm/MultiClassSVM.h', needed by modshogun_wrap.cxx'.  Stop. 13:11 I can't find where the path is specified 13:11 pluskid: find shogn | xargs grep svm/MultiClassSVM.h 13:11 shogun* 13:11 find shogun* 13:11 :D 13:11 blackburn: deleting .o would solve that? 13:12 n4nd0: yes and .cxx. 
but only in the case that MultiClassSVM is not referenced by some include 13:12 pluskid: did you change include in multiclasslibsvm? 13:13 I changed a lot 13:13 btw, the refactoring is a huge project 13:13 pluskid: did you 'make clean'? 13:13 there are dozens of multiclass svm subclasses 13:14 and even GUI stuff is related 13:14 yes I did make clean 13:14 I'll try again 13:14 Scatter, LaRank, GMNP, LibSVM 13:14 MKLMulticlass 13:14 ah no 13:14 shogun/classifier/svm/ScatterSVM.h:#include <shogun/classifier/svm/MultiClassSVM.h> 13:14 shogun/classifier/svm/LibSVMMultiClass.h:#include <shogun/classifier/svm/MultiClassSVM.h> 13:14 shogun/classifier/svm/MultiClassSVM.cpp:#include <shogun/classifier/svm/MultiClassSVM.h> 13:14 shogun/classifier/svm/GMNPSVM.h:#include <shogun/classifier/svm/MultiClassSVM.h> 13:14 shogun/classifier/svm/LaRank.h:#include <shogun/classifier/svm/MultiClassSVM.h> 13:14 pluskid: ^ 13:15 I see 13:15 thanks 13:15 interfaces/modular/Classifier_includes.i: #include <shogun/classifier/svm/MultiClassSVM.h> 13:15 interfaces/modular/Classifier.i:%include <shogun/classifier/svm/MultiClassSVM.h> 13:15 pluskid: so are you unhappy doing it ? :) 13:16 blackburn, I'd rather not give up at this moment 13:17 :p 13:17 pluskid: it requires some careful work but it is really needed 13:18 blackburn, yes, I see 13:18 there are lots of duplicated/similar code in MulticlassSVM 13:18 and the new multiclass facility 13:19 pluskid: similar to multiclass machine? 13:19 yes 13:19 pluskid: actually you may remove it 13:19 MulticlassSVM? 13:19 no 13:19 I don't think so, too :) 13:20 pluskid: I mean duplicated stuff 13:20 yes 13:20 pluskid: just derive it from kernel multiclass machine 13:20 and add there required methods 13:20 yes, that's what I'm doing 13:20 :) 13:20 pluskid: did you catch the idea of multiclass stuff? 13:20 i mean that generic thing 13:20 I think yes 13:20 while GMNP/LaRank/Scatter are all OvR 13:21 it would be easy 13:21 but the generic multiclass training procedure is much different from our multiclass-svms 13:21 it seems 13:21 it should be overridden with svms 13:21 so GMNP/LaRank/Scatter all cannot support OvO?
13:21 pluskid: no 13:21 GMNP is modified Weston-Watkins formulation 13:22 and LaRank is primal Crammer-Singer 13:22 so no way to do OvO 13:22 see you later guys 13:23 didn't know those methods 13:23 pluskid: for example in gmnpsvm all you need is to change set_svm part 13:23 cu n4nd0 13:23 n4nd0: see you 13:23 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] 13:23 yes 13:23 I mean training will stay 13:23 all you need is to set machines 13:23 -!- ishaanmlhtr [~chatzilla@101.63.111.140] has joined #shogun 13:24 currently, I'm using some regression test to ensure the result before and after refactoring is the same 13:24 nice 13:24 pluskid: the formulations are 13:24 C-S: \sum_m ||w_m||^2 + C \sum_i \xi_i      s.t.    <w_{y_i}, x_i> - <w_m, x_i> \geq 1 - \delta_{y_i,m} - \xi_i 13:25 W-W: \sum_m ||w_m||^2 + C!!! \sum_m !!! \sum_i \xi_{i,m}      s.t.    <w_{y_i}, x_i> - <w_m, x_i> \geq 1 - \delta_{y_i,m} - \xi_{i,m} 13:25 m = 0, ... , K  --- classes 13:25 i = 1, ... , N --- train vectors 13:25 the only difference is the per-class slacks 13:26 what is C-S and W-W denoting? 13:27 Crammer-Singer 13:28 Weston-Watkins 13:28 http://www.cis.uab.edu/zhang/Spam-mining-papers/On.the.Algorithmic.Implementation.of.Multiclass.Kernel.based.Vector.machines.pdf 13:28 ah, thanks for the reference 13:28 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9594 13:29 pluskid: W-W is rather infeasible but GMNP does a magic trick for that 13:29 magic?
:D 13:30 it maps the problem to single-class one using kesler's transformation 13:30 yes kind of 13:30 hmm, lots of magic for me :D 13:32 I won't understand all those before reading the papers 13:32 ok have to go 13:36 pluskid: see you 13:36 -!- blackburn [~qdrgsm@109.226.90.219] has left #shogun [] 13:36 cu 13:37 -!- PhilTillet [~Philippe@157.159.42.154] has joined #shogun 13:48 -!- ishaanmlhtr [~chatzilla@101.63.111.140] has quit [Quit: ChatZilla 0.9.88.2 [Firefox 9.0.1/20111220165912]] 13:49 -!- pluskid [~pluskid@1.204.107.190] has quit [Ping timeout: 260 seconds] 14:06 -!- pluskid [~pluskid@173.254.214.60] has joined #shogun 14:07 -!- Netsplit *.net <-> *.split quits: naywhayare 15:07 -!- Netsplit *.net <-> *.split quits: shogun-buildbot, PhilTillet, pluskid, CIA-64, @sonney2k, wiking, vikram360 15:07 -!- Netsplit over, joins: pluskid, PhilTillet, wiking, naywhayare, vikram360, shogun-buildbot, CIA-64, @sonney2k 15:10 -!- Netsplit *.net <-> *.split quits: naywhayare, pluskid, PhilTillet, shogun-buildbot, CIA-64, wiking, @sonney2k, vikram360 15:11 -!- Netsplit over, joins: pluskid, PhilTillet, wiking, naywhayare, vikram360, shogun-buildbot, CIA-64, @sonney2k 15:17 -!- blackburn [~qdrgsm@188.168.2.179] has joined #shogun 17:05 -!- pluskid [~pluskid@173.254.214.60] has quit [Quit: Leaving] 17:14 -!- blackburn [~qdrgsm@188.168.2.179] has quit [Ping timeout: 252 seconds] 17:39 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun 18:13 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 245 seconds] 18:24 -!- PhilTillet [~Philippe@157.159.42.154] has quit [Ping timeout: 265 seconds] 18:38 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun 18:57 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun 19:13 -!- flxb [~cronor@fb.ml.tu-berlin.de] has joined #shogun 19:32 If I use a kernel normalizer and init the same kernel object with train data first and test data later, can i be sure the normalization is 
done properly? so that test and train data are normalized the same way? 19:33 also, does shogun provide methods to center data in feature space? 19:34 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] 19:52 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Remote host closed the connection] 19:56 -!- blackburn [5bde8018@gateway/web/freenode/ip.91.222.128.24] has joined #shogun 20:15 flxb: 1) yes while normalizers are applied on-the-fly 2) PruneVarSubMean should work for that 20:15 blackburn: PruneVarSubMean centers the input features, correct? Is there a way to center in kernel feature space? 20:17 flxb: uh you mean center kernel matrix? 20:18 blackburn: yes 20:18 flxb: I have to ask you how big is your kernel? 20:20 blackburn: 3k x 3k 20:20 not very big 20:20 ah nevermind I find it 20:20 ZeroMeanCenterKernelNormalizer 20:20 (I was thinking about suggesting to compute whole matrix and center it) 20:21 ahh, there is a normalizer for that? now how can i apply to normalizers? zero mean and avg diag 20:21 flxb: what do you mean? 20:22 you need both? 20:22 yes 20:22 i want to center the kernel and normalize it 20:22 blackburn, hmmhh where do you draw the line with the 'external' patch? 20:23 sonney2k: what line? how to choose if lib is external? 20:23 for example svmlight is also taken from other source, SVMLin too 20:23 sonney2k: ah I just was lazy to complete that 20:24 libsvm is heavily modified, svmlight too 20:24 and wanted to show proof-of-concept 20:24 sonney2k: yes modifications are marked with shogun_* 20:24 flxb: I am afraid this is not really possible now 20:24 however I can suggest a trick for that 20:25 you may attach first normalizer and compute custom kernel with it 20:25 and the attach second to the custom kernel 20:25 blackburn: yes this is a good idea 20:25 blackburn, ok 20:25 sonney2k: do you like it? 20:25 I don't really understand what this is for though? 20:26 what does it help? 
20:26 sonney2k: better structure 20:26 I was (and others were) confused with a lot of .h and .cpp in svm folder 20:26 another issue is cross-dependencies of stuff like ocas 20:27 multiclass ocas was referencing to classifier/svm/libocas 20:27 blackburn: now, i would do: k = GaussianKernel(), k.set_normalizer(), k.init(train,train), k_tr = k.get_kernel_matrix(), k.init(train, test), k_te = k.get_kernel_matrix(). Would this work? I'm worried that train and test kernel have a different normalization afterwards... 20:27 blackburn, yeah but isn't it more natural to include shogun/classifier/svm/Tron.h than sth in lib/external ? 20:28 for users of libshogun I mean 20:28 I don't know - they should know what tron is 20:28 flxb: should work I think however that can be different.. sonney2k can you help out there? 20:29 flxb, yes that will work init is done based on lhs only if there is some train data specific stuff to do 20:31 wow farbrausch code 20:31 sonney2k, blackburn: cool thanks you two! i will try it out 20:31 I always wanted to check this demoscene code 20:32 blackburn, yeah crazy... and he blogged about some cooperative garbage collector significantly simplifying code w/o speed loss 20:33 and no longer incref / decref :) 20:33 s/he/chaos/ 20:33 sonney2k: these guys can teach us a lot of things :D 20:34 blackburn, regarding lib/external I have a different opinion: I think we should put everything in there that is not meant to be seen from the outside 20:35 but not the things that make sense to be directly included 20:35 for example: if we have a severely modifed CLibSVM class that is meant to be directly used then it should stay in the hierarchy 20:37 the SVM_libsvm.{cpp,h} stuff should be in lib/external 20:38 sonney2k: could you please provide example what should stay and what should go to lib/external? 20:38 that was the example ^ 20:39 uhoh what's with me 20:39 I can't understand :D 20:39 CTron / LibLinear would stay 20:39 why? 
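What centering a kernel matrix in feature space (the ZeroMeanCenterKernelNormalizer topic above) amounts to can be written out directly. This is just the textbook centering formula K_c = K minus row means minus column means plus the grand mean, not shogun's implementation:

```cpp
#include <vector>
#include <cstddef>

// Center a square, symmetric kernel matrix K so the implicit feature-space
// data has zero mean: Kc[i][j] = K[i][j] - rowmean[i] - rowmean[j] + grandmean.
std::vector<std::vector<double> >
center_kernel(const std::vector<std::vector<double> >& K)
{
    size_t n = K.size();
    std::vector<double> row_mean(n, 0.0);
    double grand_mean = 0.0;
    for (size_t i = 0; i < n; ++i)
    {
        for (size_t j = 0; j < n; ++j)
            row_mean[i] += K[i][j];
        row_mean[i] /= n;
        grand_mean += row_mean[i];
    }
    grand_mean /= n;

    std::vector<std::vector<double> > Kc(n, std::vector<double>(n, 0.0));
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j)
            Kc[i][j] = K[i][j] - row_mean[i] - row_mean[j] + grand_mean;
    return Kc;
}
```

After centering, every row and column of the matrix sums to zero, which is the property flxb is after before applying a second normalizer.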
20:39 libqp would go 20:40 because of serious modifications? 20:40 it is CLibLinear now 20:41 SVM_linear* would go 20:41 sonney2k: ok is there any library I moved that should go back? 20:41 Tron 20:44 sonney2k: because of? 20:44 has there been any decision now on where the latent stuff should reside? 20:45 wiking: no but thanks for remind :) 20:45 heheh sonney2k read logs ;) 20:45 sonney2k: I mean Tron is not visible from the outside and it is external code, why should it stay here 20:46 -!- gsomix [~gsomix@188.168.128.177] has joined #shogun 20:53 hi there 20:53 blackburn, it is a CSGObject derived object that can be used from external code 20:53 sonney2k: makes sense then 20:55 wiking, ? 20:55 sonney2k: ok another issue - do you agree all the latent stuff 20:55 should go to shogun/latent 20:55 latentfeatures,latentlabels,anything 20:55 sonney2k: blackburn is just explaining it... 20:55 blackburn, why not have it under features and classifier as normal? 20:56 I mean we don't have shogun/dot or anything 20:56 sonney2k: I think that domain specific separation is nice 20:56 so a few more classes dont' hurt 20:56 latent? no? domainadaptation -> yes 20:57 so for transfer learning stuff you agree? 20:57 yes - there is a sh*tload of algorithms for that 20:58 so it makes sense to have them in separate dir 20:58 shitload? it seems you like transfer learning :D 20:58 no 20:58 it usually doesn't gain you anything 20:58 (like MKL) 20:58 sonney2k: 'like' 20:59 there are exceptions to the rule but the costs are exorbitantly massive 20:59 sonney2k: great to see that you even had a JMLR article about it ;P 21:00 sonney2k: but chris reported some gain ;) 21:00 with tree stuff 21:00 -!- harshit_ [~harshit@182.68.246.67] has joined #shogun 21:00 wiking, 2 even! 21:01 and you are 'trashing' it on irc :))) 21:01 sonney2k: that is something funny :) 21:01 sonney2k: I am afraid to ask what do you think about ocas? 
21:01 wiking, in the first article we just developed a faster algorithm to solve MKL 21:02 and cutting planes in general? 21:02 and we used it to understand the classifier 21:02 not to improve learning result - which mkl only rarely does 21:02 in second article (Lp norm mkl) we managed to improve some state-of-the-art results *slightly* 21:03 :)))) 21:03 and that's still worth a jmlr article :) 21:03 so nice 21:03 i really need one journal article soon :) 21:03 and my $#@$@ work doesn't worth :E 21:04 the papers have a lot more than that 21:04 heheheh yeah i guessed 21:04 but that is the essence of MKL 21:04 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun 21:04 if you want to use it - try L2-norm MKL 21:04 i've treid 21:04 tried... was giving really bad results :( 21:05 that will always improve or stay the same (at least on data I had) 21:05 smells like a bug then 21:05 wiking, I guess you used multiclass? no idea if alex binders implementation is well tested 21:05 dunno really i've just tried MKL on a dataset... i'm getting 7-8% worse results (in accuracy) than with MKL 21:06 we did only binary 21:06 i was mening 21:06 parse error... 21:06 meaning worse results than with simple combined kernel 21:06 and GMNP 21:06 gmnp with combined kernel is better than mkl 21:07 the only thing is that if i want the 'best' result than i have to do a parameter search on the weights for the various kernels 21:08 wiking, the (unweighted) combined kernel is *very hard* to beat 21:15 why is it called set_full_kernel_matrix_from_full and not simply set_kernel_matrix? does it do something special? 21:17 sonney2k: !! I think we should remove m_q 21:18 totally useless 21:18 flxb: it can be set full from triangle 21:18 and triangle from full 21:19 some cases there 21:19 ah 21:19 sonney2k: no I shall convince you to remove q 21:22 -!- puffin444 [230bf329@gateway/web/freenode/ip.35.11.243.41] has joined #shogun 21:23 Hi 21:24 hey 21:25 How's it going? 
21:25 blackburn, why is it useless? 21:26 sonney2k: it is VERY similar to reducing k 21:26 sonney2k: small q leads to k=1 virtually 21:27 it is not of much interest when you do knn 21:27 blackburn, why is the buildbot failing? 21:27 I didn't know it is failing :D 21:27 blackburn, http://shogun-toolbox.org/buildbot/builders/nightly_none/builds/210/steps/compile/logs/stdio 21:28 puffin444, strong currents currently :) 21:28 blackburn, Re q I don't see this as a problem 21:30 sonney2k: hmm I thought I fixed that 21:30 sonney2k: it is useless 21:30 blackburn, so q>=1 has no effect on solution right? 21:31 sonney2k: q>=1?? 21:31 except if distance < 1 21:31 it will be farest neighbor 21:31 :D 21:31 sonney2k: that is not related to distances 21:32 while it is a multiplier 21:32 * sonney2k checks the formula 21:36 sonney2k: with q=1 it +1 to class histogram 21:36 with q=0.5 it +0.5, +0.25, ... to class histogram 21:37 ahh so points far away get less influence 21:37 yes sounds consistent but it makes no sense in practice 21:37 why not? 
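The m_q weighting under debate, as blackburn describes it: the i-th nearest neighbour (rank starting at 1) adds q^i to its class histogram, so q = 1 is plain majority voting and q = 0.5 gives 0.5, 0.25, 0.125, ... The function name is hypothetical, for illustration only.

```cpp
#include <cmath>

// Contribution of the neighbour at the given rank to its class histogram.
// With q = 1 every neighbour counts equally; with q < 1 distant neighbours
// quickly stop mattering, which is blackburn's objection.
double knn_rank_weight(double q, int rank)
{
    return std::pow(q, rank);
}
```

With k = 10 and q = 0.5 the farthest neighbour contributes under 1e-3, which in practice behaves much like simply shrinking k.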
21:37 better use parzen windows for that 21:37 that is not really a reason 21:38 I don't think this kind of weighting makes sense 21:38 I do 21:39 sonney2k: and it makes things harder while covertree do not place vectors in order 21:39 yeah but that doesn't really matter 21:40 I mean we can just have a speed up for q=1 21:40 and otherwise default to what we had 21:40 sonney2k: ah okay it is hard to convince you :) 21:40 which we should keep anyways 21:40 blackburn, well you implemneted this and it sounds like it makes a lot of sense to me 21:41 sonney2k: I still think it is a stupid heuristics 21:41 it protects you a bit from outliers - so I could imagine higher k to be useful in k-NN (I'ver never seen k>7 or so to be useful for q=1) 21:44 sonney2k: with k=10 and q=0.5 farest vector will have weight 1e-5 :) 21:45 with C=1e-5 SVM alphas will be mostly 0 21:46 it is the same like there was no vector at all 21:46 obviously 0.5 is an extreme choice already... 21:47 0.9? 0.99? 21:47 almost no influence at all 21:47 0.9**10 -> 0.34 21:48 anyways 21:48 what to do with harshits TSVM? 21:49 sonney2k: ask harshit_ :D 21:49 blackburn, what I meant is have you looked at the code? 21:54 PhilTillet, I think for nearest centroid it doesn't make sense to have a store_model_features method at all 21:55 sonney2k: no while he said it is incomplete 21:55 I mean you have to store the centroids 21:55 that is what was meant with store model features 21:56 hmm okay, and I have no real choice but to store them during training :p 21:57 in the context of e.g. 
SVM it makes sense to not copy all the SVs (because there can be many and they can be huge) but just their indices 21:57 yep 21:57 got it 21:57 so store_model_features has a meaning :) 21:57 PhilTillet, ok so please put the call to kernel init back in and all good :) 21:57 PhilTillet, another thing - you assume that you are getting CSimpleFeatures - but you should check (at least add some ASSERT()'s checking for get_feature_class() = CT_SIMPLE()) 22:01 PhilTillet, ahh and I am wondering where m_shrinking is used? 22:02 Ah, I had a question concerning that :D (not yet implemented) 22:03 should the centroids be shrunken after training, or shrunken when classifying? 22:03 first option make classification faster but second option avoid re-training when changing m_shrinking 22:04 so I don't really know :p 22:04 PhilTillet, I guess doing both would resolve the conflict :) 22:05 i.e. do it in training 22:06 and add some apply_shrinked() 22:06 function that would do the shrinking would do 22:06 ok:p 22:10 good night, guys 22:11 -!- flxb [~cronor@fb.ml.tu-berlin.de] has left #shogun [] 22:14 gsomix, nite 22:15 Is it night for most people here? 22:18 sonney2k blackburn so we have a consensus that the latent stuff will go to the usual places, i.e. latentfeatures and latentlabel class under shogun/features and the latentlinearclassifier under classifier/svm/ ? 22:21 wiking, yes 22:24 ok cool 22:24 puffin444, yes 22hrs  here and 24hrs in russia :) 22:24 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun 22:26 hi n4nd0 22:26 hey sonney2k 22:26 how is it going? 22:26 and hi to everyone else too :) 22:27 Hey 22:27 n4nd0, just had a look at your patch - nice job :) 22:31 puffin444, can we get some nice shiny graphical example for your GP implementation - I think olivier has sth in his (python based) toolbox :) 22:31 sonney2k: thanks! 
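The nearest-centroid classifier discussed above (PhilTillet's patch) can be sketched in a few lines: training stores one centroid per class, and apply returns the class whose centroid is closest in squared Euclidean distance. The shrinking step and the function names are left out / made up here; this is not shogun's API.

```cpp
#include <vector>
#include <cstddef>

// Training: average the vectors of each class into a centroid.
std::vector<std::vector<double> > train_centroids(
    const std::vector<std::vector<double> >& X,
    const std::vector<int>& y, int num_classes)
{
    size_t dim = X[0].size();
    std::vector<std::vector<double> > c(num_classes,
                                        std::vector<double>(dim, 0.0));
    std::vector<int> count(num_classes, 0);
    for (size_t i = 0; i < X.size(); ++i)
    {
        ++count[y[i]];
        for (size_t d = 0; d < dim; ++d)
            c[y[i]][d] += X[i][d];
    }
    for (int k = 0; k < num_classes; ++k)
        for (size_t d = 0; d < dim; ++d)
            c[k][d] /= count[k];
    return c;
}

// Classification: pick the class with the nearest centroid.
int classify(const std::vector<std::vector<double> >& c,
             const std::vector<double>& x)
{
    int best = 0;
    double best_dist = -1.0;
    for (size_t k = 0; k < c.size(); ++k)
    {
        double dist = 0.0;
        for (size_t d = 0; d < x.size(); ++d)
            dist += (x[d] - c[k][d]) * (x[d] - c[k][d]);
        if (best_dist < 0.0 || dist < best_dist)
        {
            best_dist = dist;
            best = (int)k;
        }
    }
    return best;
}
```

Unlike an SVM, where store_model_features keeps only the support-vector indices, the centroids themselves are the whole model, so storing them at training time (as suggested above) is the natural choice.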
I have your messages, I will start making the modifications and continue with the job this next week 22:31 -!- blackburn [5bde8018@gateway/web/freenode/ip.91.222.128.24] has quit [Ping timeout: 245 seconds] 22:33 That's something I could try. Is there visualiztion stuff in the Python extensions? 22:35 puffin444, not really - just use the excellent matplotlib - it's gallery/examples certain have what you need :) 22:44 Sounds good. 22:46 I think there is a nice plot in scikits for GPs 22:46 puffin444: you could use it as inspiration 22:46 I'll take a look at it. 22:47 http://scikit-learn.org/0.10/modules/gaussian_process.html 22:47 n4nd0, heh - they certainly have some nice documentation 22:48 I don't understand why no one here wants to document things 22:48 sonney2k: you mean within the code or in web format? 22:48 even though it is much easier just to write a little doxygen code no one writes complex formulas or anything in the class descripton 22:49 sonney2k: aham I see 22:50 personally I like more something like the one in scikits 22:50 I think they do it with sphinx 22:51 sonney2k: would you like to see that in shogun? 22:52 n4nd0, the problem is more that someone has to write all this stuff 22:53 *and* keep it up-to-date 22:53 maybe we could start with it step by step 22:53 sonney2k: hey, TSVM is not complete for now. I think there is some problem with tsvm part of ssl.cpp, which currently I am figuring out. 22:54 also due to university tests i am not able to give much time so hopefully that would be done by next sunday 22:54 Yeah it looks pretty, but I don't know how much time a scikit-like graph would take. 22:56 puffin444: do you mean to code it? 22:56 harshit_, ok np! university tests are certainly more important! 22:56 Yeah I don't know if I can code this week or not. 
22:56 *code it* 22:57 puffin444: aham, all right 22:57 puffin444: anyway, in case you didn't see it, here it is the code to do that figure 22:57 http://scikit-learn.org/0.10/auto_examples/gaussian_process/plot_gp_regression.html 22:58 puffin444: if the method you have implemented in shogun has a similar interface to the one there, you wouldn't need to change much more than the actual part to train the GP 22:58 all the plot stuff is there 22:59 you can change the colours, the function to predict and you have there a complete new example of your own :) 22:59 Oh I see. 23:00 puffin444: it is just a suggestion of course, I hope you don't mind I said it! 23:00 No that's fine. Yes this might work. My time is just somewhat limited as I have a major exam next week. 23:01 -!- blackburn [5bde8018@gateway/web/freenode/ip.91.222.128.24] has joined #shogun 23:11 n4nd0: have you seen sonney2k wants q in knn? 23:15 blackburn: yeah 23:15 blackburn: I told you it could be better to wait ;) 23:16 blackburn: no way to convince him about it? 23:16 n4nd0: I am still think it should go :D 23:17 sure* 23:17 n4nd0: I don't know probably no way :D 23:17 haha ok, let's keep it then 23:18 I am sorry I said it should go 23:19 -!- vikram360 [~vikram360@117.192.163.162] has quit [Ping timeout: 276 seconds] 23:19 blackburn: no problem man! no need to say sorry ;) 23:20 indeed no need to be sorry 23:28 nite guyes! 23:28 bye! 23:28 good night 23:29 I actually have to go too. 23:32 I'll see what I can do with GP visualization. 23:32 cool 23:34 -!- puffin444 [230bf329@gateway/web/freenode/ip.35.11.243.41] has left #shogun [] 23:34 -!- blackburn [5bde8018@gateway/web/freenode/ip.91.222.128.24] has quit [Quit: Page closed] 23:35 another that abandons, good night people! 
23:36 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] 23:36 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 264 seconds] 23:49 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun 23:52 -!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Remote host closed the connection] 23:53 --- Log closed Mon Apr 16 00:00:19 2012