--- Log opened Sun Apr 15 00:00:19 2012
-!- karlnapf [] has quit [Quit: Leaving.]  [01:55]
-!- blackburn [~qdrgsm@] has left #shogun []  [02:09]
-!- harshit_ [~harshit@] has quit [Remote host closed the connection]  [02:10]
-!- n4nd0 [] has quit [Quit: leaving]  [02:48]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]  [05:34]
-!- wiking [] has joined #shogun  [09:11]
-!- wiking [] has quit [Changing host]  [09:11]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [09:11]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]  [09:17]
-!- wiking [] has joined #shogun  [09:43]
-!- wiking [] has quit [Changing host]  [09:43]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [09:43]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]  [09:43]
-!- wiking [] has joined #shogun  [09:49]
-!- wiking [] has quit [Changing host]  [09:49]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [09:49]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]  [10:10]
-!- blackburn [~qdrgsm@] has joined #shogun  [10:20]
-!- n4nd0 [] has joined #shogun  [10:28]
<n4nd0> good morning!  [10:29]
<blackburn> easter today here finally :D  [10:35]
<n4nd0> good :)  [10:37]
<n4nd0> is it holidays there too?  [10:38]
-!- naywhayare [] has quit [Ping timeout: 272 seconds]  [10:38]
-!- naywhayare [] has joined #shogun  [10:38]
<blackburn> n4nd0: yes kind of  [10:41]
<n4nd0> blackburn: bye bye to m_q then?  [10:48]
<blackburn> n4nd0: yeah I think so  [10:48]
<n4nd0> blackburn: should we wait and ask sonney2k?  [10:49]
<blackburn> I don't think so  [10:49]
<blackburn> n4nd0: can you imagine it is useful?  [10:49]
<n4nd0> blackburn: well it is just a heuristic to give more importance to the class of the points that are closer  [10:50]
<n4nd0> I think it actually makes sense  [10:50]
<blackburn> it would give ~ the same result as reducing k  [10:51]
<n4nd0> but to tell the truth I don't know if it makes a real difference in practice  [10:51]
<n4nd0> yes exactly  [10:51]
<blackburn> no need for that  [10:51]
<n4nd0> all right  [10:51]
<n4nd0> will drop it then ;)  [10:51]
<blackburn> please :)  [10:51]
<blackburn> hahahahah putin came with moscow's mayor instead of his wife lol  [10:54]
<blackburn> n4nd0: check these shameful cults full of gold and opulence  [10:54]
<n4nd0> yeah, I am watching it  [10:55]
<n4nd0> I don't understand that much though :P  [10:55]
<blackburn> n4nd0: just imagine how much money they spent on this golden shit  [10:56]
<blackburn> n4nd0: and there is a funny story that putin dumped his wife - and we were laughing that he brought the mayor instead of his wife :D  [10:57]
<blackburn> it is ok to be gay lol  [10:58]
<blackburn> n4nd0: btw legislators here are going to introduce a law that punishes gays for gay advocacy  [10:59]
<blackburn> how wild is it for you?  [10:59]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [11:03]
<blackburn> n4nd0: we should adv it to soeren  [11:04]
<n4nd0> blackburn: yeah, we should do that!  [11:04]
<blackburn> n4nd0: btw do you like the libs extraction?  [11:04]
<n4nd0> blackburn: what is it?  [11:05]
<blackburn> n4nd0: this one ^  [11:05]
<n4nd0> blackburn: aham, so to move code from other libraries to a specific directory?  [11:06]
<n4nd0> blackburn: yes, I like it - I think the project is better ordered like that  [11:07]
<n4nd0> blackburn: is there any con?  [11:07]
<blackburn> I don't see any  [11:07]
<n4nd0> I agree then  [11:08]
<n4nd0> I should probably move the cover tree there  [11:10]
<blackburn> n4nd0: to avoid some merge issues better leave it there  [11:12]
<blackburn> and we will move it later  [11:12]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]  [11:18]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [11:18]
<n4nd0> blackburn: I like the article about containers  [11:19]
<blackburn> yeah this faq is nice  [11:20]
<blackburn> there are other faqs covering the whole of C++  [11:20]
<n4nd0> blackburn: I'd actually like to see more C++ style in the project  [11:21]
<blackburn> any other suggestions?  [11:22]
<n4nd0> I like the STL too  [11:24]
<n4nd0> and as I think I told you once and pluskid also suggested, smart pointers could be a good idea  [11:25]
<blackburn> smart pointers - yes, agree  [11:25]
<n4nd0> blackburn: for algorithms and for the containers  [11:25]
<n4nd0> blackburn: to use iterators  [11:26]
<blackburn> I would like to see a good example of <algorithm> usage  [11:26]
<n4nd0> like how to use some of the functions there?  [11:27]
<wiking> blackburn:  :))))  [11:27]
<wiking> sonney2k: does not like STL afaik  [11:28]
<wiking> i've said 2 months ago that to have iterators on various types like sgvector or sgmatrix would be great to see  [11:28]
<blackburn> n4nd0: yes, I can't see one because I do not think in the <algorithm> way :D  [11:28]
<blackburn> wiking: we have iterators on dot features  [11:28]
<n4nd0> we should buy a present for sonney2k  [11:29]
<blackburn> send one for me please too  [11:30]
<wiking> lol man i've completely overlooked this function set_combined_kernel_weight  [11:30]
<blackburn> okay nevermind I downloaded a torrent lol  [11:30]
<n4nd0> blackburn: be careful man, logged chat :P  [11:31]
<blackburn> n4nd0: recall the country I am from  [11:31]
<n4nd0> in that book they recommend and talk a bit about for_each  [11:32]
<wiking> <algorithm> is cool  [11:32]
<wiking> but then again  [11:32]
<wiking> it really depends  [11:32]
<n4nd0> I have never used it myself but it looks nice  [11:32]
<wiking> sometimes the whole STL thing is a bit like: "shooting at a bird with a cannon"  [11:32]
<wiking> so sometimes you are just better off with valarray  [11:35]
<wiking> it's much faster than any other STL container type  [11:35]
<wiking> so for example in the case of SGVector having a valarray would be good  [11:38]
<n4nd0> with valarray you mean a normal T*?  [11:38]
<n4nd0> I'm sorry I didn't understand properly :S  [11:38]
<n4nd0> and what about using the array class in C++11 for that?  [11:41]
<wiking> mmm valarray is by far the fastest  [11:41]
<wiking> and simply have a vector like  [11:41]
<wiking> valarray<float64_t> vec  [11:41]
<wiking> and still you can have some funky operations on them  [11:42]
<wiking> with the help of the apply function  [11:42]
<wiking> n4nd0: and the problem with array is that it's C++11  [11:46]
<wiking> so you'll have some problems with code portability  [11:46]
<wiking> while valarray is standard C++  [11:46]
<n4nd0> I had no idea about this valarray  [11:47]
<n4nd0> I think it is cool  [11:47]
<wiking> anyhow valarray is just a good container when you have SIMD operations on your machine... so like any normal cpu nowadays  [11:47]
<wiking> mmx, sse, sse2  [11:47]
<wiking> any gcc will be able to optimize your code to use SIMD  [11:48]
<wiking> with valarray  [11:48]
<wiking> but valarray is basically good for built-in types... like float, int etc  [11:50]
<wiking> it's not a good container for objects or pointers to objects  [11:50]
<wiking> basically it's not meant for that :))  [11:50]
<n4nd0> I see  [11:50]
<wiking> for that u use some standard STL container  [11:50]
<wiking> maaaan amazing :)  [12:21]
<wiking> done it :)  [12:22]
<wiking> blackburn: what ya say about the valarray?  [12:22]
<blackburn> wiking: I heard it is something like bringing fortran speed to C++?  [12:22]
<wiking> you must have read stackoverflow  [12:23]
<blackburn> yeap :D  [12:23]
<blackburn> wiking: I need to override a protected method of the base of the base class in the derived one. how do you think it should be done?  [12:28]
<wiking> so you want to override the function a in class c where c : b, b : a  [12:29]
<wiking> a sorry this was stupid  [12:29]
<wiking> so you want to override the function x in class c where c : b, b : a and a.x ?  [12:30]
<wiking> so that x is defined in a...  [12:30]
<wiking> why would virtual not work?  [12:30]
<blackburn> wiking: exactly  [12:30]
<blackburn> wiking: it would work but it is not visible from c  [12:30]
<wiking> ah ok  [12:30]
<blackburn> I guess I need to declare it in b?  [12:31]
<wiking> well in case of c a class is simply a struct no?  [12:31]
<blackburn> wiking: damn, that would be easy if it was public but I don't want to expose these methods :D  [12:32]
<wiking> mmm wait a second  [12:33]
<blackburn> however I need to redeclare it again in C  [12:33]
<wiking> how do you want to access that method at all in c?  [12:33]
<blackburn> wiking: isn't it possible?  [12:33]
<wiking> well afaik no  [12:33]
<wiking> i mean it's not going to create you function pointers or something  [12:33]
<wiking> and simple c does not understand the syntax a.function ()  [12:34]
<wiking> unless there's a function pointer there in struct a  [12:34]
<blackburn> wiking: but it will work in case of public virtual  [12:34]
<wiking> mmm have u tried?  [12:34]
<wiking> i mean i'll try it for you now :)  [12:35]
<blackburn> wiking: why not? I guess I did not describe it right  [12:35]
<blackburn> I need to redefine a method of A in C  [12:35]
<wiking> yeah i got that  [12:35]
<wiking> i'm just curious how it will work at all in ansi c  [12:36]
<blackburn> the only problem is visibility  [12:36]
<wiking> while you are having classes  [12:36]
<wiking> so for example my compiler dies on this :)  [12:37]
<wiking> if u compile it with gcc  [12:38]
<wiking> as if you want to have c  [12:38]
<blackburn> what's this?  [12:38]
<wiking> well a very simple example trying to use a class in ansi c  [12:38]
<wiking> that this way it won't work  [12:39]
<wiking> so i'm wondering how do you want to call a class and its method in ansi c  [12:39]
<blackburn> why ansi c?  [12:39]
<wiking> aah fuck  [12:39]
<wiking> blackburn: wiking: it would work but it is not visible from c  [12:39]
<blackburn> damn :D  [12:39]
<blackburn> wiking: no, the problem is  [12:39]
<wiking> i've got it  [12:39]
<wiking> what's the problem now  [12:39]
<wiking> why wouldn't it inherit  [12:40]
<blackburn> I have a protected virtual method in A  [12:40]
<wiking> yes i got that now  [12:40]
<blackburn> will it be available from C?  [12:40]
<wiking> i wonder why isn't it getting as well down to C  [12:40]
<blackburn> hmm let me check again  [12:40]
<wiking> class friend is not transitive i know  [12:40]
<wiking> but inheritance of a func...  [12:40]
<wiking> works for me  [12:43]
<blackburn> hmm fine thanks  [12:44]
<wiking> what was the problem?  [12:44]
<blackburn> wiking: currently I have no idea if there was a problem :D  [12:45]
<wiking> ah ok  [12:45]
<wiking> anyhow this what you've told should work  [12:45]
<wiking> even if it's not redefined in class B  [12:45]
<blackburn> wiking: yes it seems so :)  [12:46]
<blackburn> wiking: btw did you postpone the latent stuff a little?  [12:48]
<wiking> blackburn: no why?  [12:49]
<blackburn> just curious :)  [12:49]
<wiking> blackburn: working on it...  [12:49]
<wiking> just had a chat with alex the other day  [12:49]
<wiking> we are finally on the same page  [12:50]
<wiking> as obviously from our emails we had a misunderstanding  [12:50]
<blackburn> I see  [12:50]
<wiking> now we 'realized' that yes actually we need the LatentFeatures  [12:50]
<blackburn> wiking: btw where did you place your classes?  [12:50]
<wiking> otherwise we couldn't optimize the latent part  [12:50]
<wiking> blackburn: well now they are still there where i've committed them first  [12:51]
<blackburn> I suggest latent both for latentfeatures and other latent stuff  [12:51]
<blackburn> I mean  [12:51]
<wiking> so you want to move everything under shogun/latent  [12:51]
<wiking> at one place?  [12:51]
<wiking> even LatentLabel and LatentFeature ?  [12:51]
<blackburn> wiking: why not?  [12:52]
<wiking> i'm just asking  [12:52]
<blackburn> I think it is a domain for that :)  [12:52]
<wiking> it's a bit confusing  [12:52]
<blackburn> do you think so?  [12:52]
<wiking> i mean even to have the features and the labels  [12:52]
<wiking> i got the point for having the machine separated  [12:53]
<blackburn> n4nd0: what do you think about that ^?  [12:53]
<n4nd0> blackburn: I have to go back in the conversation  [12:53]
<blackburn> my opinion is that base stuff should stay as is but domains are separated  [12:53]
<wiking> although maybe it should be something like  [12:53]
<wiking> just as in the case of mkl  [12:53]
<blackburn> n4nd0: do you think all the latent stuff should go to shogun/latent?  [12:53]
<n4nd0> blackburn: I am correcting a Spanish essay for my gf and I lost part of the conversation  [12:54]
<wiking> n4nd0: :DDD  [12:54]
<blackburn> n4nd0: no need to dig into the whole conversation :)  [12:54]
<blackburn> wiking: can't it be multiclass?  [12:54]
<wiking> well it should be  [12:54]
<wiking> for me otherwise it's not much of a use  [12:55]
<blackburn> or even structured output? :)  [12:55]
<wiking> structured output  [12:55]
<blackburn> if so I think it should go to shogun/latent  [12:55]
<blackburn> because it won't be a 'classifier' in the standard sense  [12:55]
<blackburn> however let us discuss it later with soeren  [12:56]
<wiking> let me know  [12:56]
<wiking> i'll continue as i've started earlier  [12:56]
<blackburn> yes it is not a hard change so  [12:56]
<blackburn> no need to worry :)  [12:56]
<wiking> yeah just a git mv  [12:56]
-!- wiking_ [] has joined #shogun  [12:58]
-!- wiking_ [] has quit [Changing host]  [12:58]
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun  [12:58]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 265 seconds]  [13:01]
-!- wiking_ is now known as wiking  [13:01]
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]  [13:03]
-!- wiking [] has joined #shogun  [13:04]
-!- wiking [] has quit [Changing host]  [13:04]
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun  [13:04]
<n4nd0> blackburn: why wouldn't LatentFeatures be ok in shogun/features?  [13:07]
-!- pluskid [~pluskid@] has joined #shogun  [13:07]
<blackburn> n4nd0: it would but it makes sense to store domain-specific things in a separate folder (for me)  [13:08]
<n4nd0> blackburn: yes, it actually makes sense too  [13:09]
<n4nd0> blackburn: I guess that in this case, LatentFeatures, it is a matter of taste  [13:09]
<n4nd0> blackburn: it probably makes sense both under latent/ and under shogun/features  [13:10]
<pluskid> I moved MulticlassSVM.{h,cpp} to the multiclass folder, but why do I always get: *** No rule to make target `../../shogun/classifier/svm/MultiClassSVM.h', needed by `modshogun_wrap.cxx'.  Stop.  [13:11]
<pluskid> I can't find where the path is specified  [13:11]
<blackburn> pluskid: find shogn | xargs grep svm/MultiClassSVM.h  [13:11]
<blackburn> find shogun*  [13:11]
<n4nd0> blackburn: deleting .o would solve that?  [13:12]
<blackburn> n4nd0: yes, and .cxx, but only in the case that MultiClassSVM is not referenced by some include  [13:12]
<blackburn> pluskid: did you change the include in multiclasslibsvm?  [13:13]
<pluskid> I changed a lot  [13:13]
<pluskid> btw, the refactoring is a huge project  [13:13]
<blackburn> pluskid: did you 'make clean'?  [13:13]
<pluskid> there are dozens of multiclass svm subclasses  [13:14]
<pluskid> and even GUI stuff is related  [13:14]
<pluskid> yes I did make clean  [13:14]
<pluskid> I'll try again  [13:14]
<blackburn> Scatter, LaRank, GMNP, LibSVM  [13:14]
<blackburn> ah no  [13:14]
<blackburn> shogun/classifier/svm/ScatterSVM.h:#include <shogun/classifier/svm/MultiClassSVM.h>  [13:14]
<blackburn> shogun/classifier/svm/LibSVMMultiClass.h:#include <shogun/classifier/svm/MultiClassSVM.h>  [13:14]
<blackburn> shogun/classifier/svm/MultiClassSVM.cpp:#include <shogun/classifier/svm/MultiClassSVM.h>  [13:14]
<blackburn> shogun/classifier/svm/GMNPSVM.h:#include <shogun/classifier/svm/MultiClassSVM.h>  [13:14]
<blackburn> shogun/classifier/svm/LaRank.h:#include <shogun/classifier/svm/MultiClassSVM.h>  [13:14]
<blackburn> pluskid: ^  [13:15]
<pluskid> I see  [13:15]
<blackburn> interfaces/modular/Classifier_includes.i: #include <shogun/classifier/svm/MultiClassSVM.h>  [13:15]
<blackburn> interfaces/modular/Classifier.i:%include <shogun/classifier/svm/MultiClassSVM.h>  [13:15]
<blackburn> pluskid: so are you unhappy doing it? :)  [13:16]
<pluskid> blackburn, I'd rather not give up at this moment  [13:17]
<blackburn> pluskid: it requires some careful work but it is really needed  [13:18]
<pluskid> blackburn, yes, I see  [13:18]
<pluskid> there is lots of duplicated/similar code in MulticlassSVM  [13:18]
<pluskid> and the new multiclass facility  [13:19]
<blackburn> pluskid: similar to multiclass machine?  [13:19]
<blackburn> pluskid: actually you may remove it  [13:19]
<pluskid> I don't think so, too :)  [13:20]
<blackburn> pluskid: I mean the duplicated stuff  [13:20]
<blackburn> pluskid: just derive it from kernel multiclass machine  [13:20]
<blackburn> and add the required methods there  [13:20]
<pluskid> yes, that's what I'm doing  [13:20]
<blackburn> pluskid: did you catch the idea of the multiclass stuff?  [13:20]
<blackburn> i mean that generic thing  [13:20]
<pluskid> I think yes  [13:20]
<blackburn> while GMNP/LaRank/Scatter are all OvR  [13:21]
<blackburn> it would be easy  [13:21]
<pluskid> but the generic multiclass training procedure is much different from our multiclass-svms  [13:21]
<pluskid> it seems  [13:21]
<blackburn> it should be overridden with svms  [13:21]
<pluskid> so GMNP/LaRank/Scatter all cannot support OvO?  [13:21]
<blackburn> pluskid: no  [13:21]
<blackburn> GMNP is a modified Weston-Watkins formulation  [13:22]
<blackburn> and LaRank is primal Crammer-Singer  [13:22]
<blackburn> so no way to do OvO  [13:22]
<n4nd0> see you later guys  [13:23]
<pluskid> didn't know those methods  [13:23]
<blackburn> pluskid: for example in gmnpsvm all you need is to change the set_svm part  [13:23]
<pluskid> cu n4nd0  [13:23]
<blackburn> n4nd0: see you  [13:23]
-!- n4nd0 [] has quit [Quit: leaving]  [13:23]
<blackburn> I mean the training will stay  [13:23]
<blackburn> all you need is to set the machines  [13:23]
-!- ishaanmlhtr [~chatzilla@] has joined #shogun  [13:24]
<pluskid> currently, I'm using some regression tests to assure the result before and after refactoring is the same  [13:24]
<blackburn> pluskid: the formulations are  [13:24]
<blackburn> C-S: \sum_m ||w_m||^2 + C \sum_i \xi_i    s.t.    <w_yi,x_i> - <w_m,x_i> \geq 1 - \delta_yi,m + \xi_i  [13:25]
<blackburn> W-W: \sum_m ||w_m||^2 + C !!! \sum_m !!! \sum_i \xi_i    s.t.    <w_yi,x_i> - <w_m,x_i> \geq 1 - \delta_yi,m + \xi_i  [13:25]
<blackburn> m = 0, ..., K --- classes  [13:25]
<blackburn> i = 1, ..., N --- train vectors  [13:25]
<blackburn> the only difference is the class slacks  [13:26]
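Cleaned up, the two objectives blackburn pastes read roughly as below. The `!!!` above just flags W-W's extra per-class sum; the "class slacks" remark is made explicit by indexing the W-W slack as \xi_i^m, and the slack is written on the conventional minus side of the constraint:

```latex
% Crammer-Singer (C-S): one slack per training vector
\min_{w,\xi}\; \sum_m \|w_m\|^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
\langle w_{y_i}, x_i \rangle - \langle w_m, x_i \rangle
  \geq 1 - \delta_{y_i,m} - \xi_i
\quad \forall i, m

% Weston-Watkins (W-W): one slack per vector and per class
\min_{w,\xi}\; \sum_m \|w_m\|^2 + C \sum_m \sum_i \xi_i^m
\quad \text{s.t.} \quad
\langle w_{y_i}, x_i \rangle - \langle w_m, x_i \rangle
  \geq 1 - \delta_{y_i,m} - \xi_i^m
\quad \forall i, m
```

with m = 0, ..., K ranging over classes and i = 1, ..., N over training vectors, as stated above.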
<pluskid> what are C-S and W-W denoting?  [13:27]
<pluskid> ah, thanks for the reference  [13:28]
<blackburn> pluskid: W-W is rather infeasible but GMNP does a magic trick for that  [13:29]
<pluskid> magic? :D  [13:30]
<blackburn> it maps the problem to a single-class one using kesler's transformation  [13:30]
<blackburn> yes kind of  [13:30]
<pluskid> hmm, lots of magic for me :D  [13:32]
<pluskid> I won't understand all of that before reading the papers  [13:32]
<blackburn> ok have to go  [13:36]
<blackburn> pluskid: see you  [13:36]
-!- blackburn [~qdrgsm@] has left #shogun []  [13:36]
-!- PhilTillet [~Philippe@] has joined #shogun  [13:48]
-!- ishaanmlhtr [~chatzilla@] has quit [Quit: ChatZilla [Firefox 9.0.1/20111220165912]]  [13:49]
-!- pluskid [~pluskid@] has quit [Ping timeout: 260 seconds]  [14:06]
-!- pluskid [~pluskid@] has joined #shogun  [14:07]
-!- Netsplit *.net <-> *.split quits: naywhayare  [15:07]
-!- Netsplit *.net <-> *.split quits: shogun-buildbot, PhilTillet, pluskid, CIA-64, @sonney2k, wiking, vikram360  [15:07]
-!- Netsplit over, joins: pluskid, PhilTillet, wiking, naywhayare, vikram360, shogun-buildbot, CIA-64, @sonney2k  [15:10]
-!- Netsplit *.net <-> *.split quits: naywhayare, pluskid, PhilTillet, shogun-buildbot, CIA-64, wiking, @sonney2k, vikram360  [15:11]
-!- Netsplit over, joins: pluskid, PhilTillet, wiking, naywhayare, vikram360, shogun-buildbot, CIA-64, @sonney2k  [15:17]
-!- blackburn [~qdrgsm@] has joined #shogun  [17:05]
-!- pluskid [~pluskid@] has quit [Quit: Leaving]  [17:14]
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 252 seconds]  [17:39]
-!- n4nd0 [] has joined #shogun  [18:13]
-!- n4nd0 [] has quit [Ping timeout: 245 seconds]  [18:24]
-!- PhilTillet [~Philippe@] has quit [Ping timeout: 265 seconds]  [18:38]
-!- n4nd0 [] has joined #shogun  [18:57]
-!- PhilTillet [] has joined #shogun  [19:13]
-!- flxb [] has joined #shogun  [19:32]
<flxb> If I use a kernel normalizer and init the same kernel object with train data first and test data later, can I be sure the normalization is done properly? so that test and train data are normalized the same way?  [19:33]
<flxb> also, does shogun provide methods to center data in feature space?  [19:34]
-!- n4nd0 [] has quit [Quit: leaving]  [19:52]
-!- PhilTillet [] has quit [Remote host closed the connection]  [19:56]
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has joined #shogun  [20:15]
<blackburn> flxb: 1) yes, while normalizers are applied on-the-fly 2) PruneVarSubMean should work for that  [20:15]
<flxb> blackburn: PruneVarSubMean centers the input features, correct? Is there a way to center in kernel feature space?  [20:17]
<blackburn> flxb: uh you mean center the kernel matrix?  [20:18]
<flxb> blackburn: yes  [20:18]
<blackburn> flxb: I have to ask you how big is your kernel?  [20:20]
<flxb> blackburn: 3k x 3k  [20:20]
<flxb> not very big  [20:20]
<blackburn> ah nevermind I found it  [20:20]
<blackburn> (I was thinking about suggesting to compute the whole matrix and center it)  [20:21]
<flxb> ahh, there is a normalizer for that? now how can I apply two normalizers? zero mean and avg diag  [20:21]
<blackburn> flxb: what do you mean?  [20:22]
<blackburn> you need both?  [20:22]
<flxb> i want to center the kernel and normalize it  [20:22]
<@sonney2k> blackburn, hmmhh where do you draw the line with the 'external' patch?  [20:23]
<blackburn> sonney2k: what line? how to choose if a lib is external?  [20:23]
<@sonney2k> for example svmlight is also taken from another source, SVMLin too  [20:23]
<blackburn> sonney2k: ah I was just lazy to complete that  [20:24]
<@sonney2k> libsvm is heavily modified, svmlight too  [20:24]
<blackburn> and wanted to show a proof-of-concept  [20:24]
<blackburn> sonney2k: yes, modifications are marked with shogun_*  [20:24]
<blackburn> flxb: I am afraid this is not really possible now  [20:24]
<blackburn> however I can suggest a trick for that  [20:25]
<blackburn> you may attach the first normalizer and compute a custom kernel with it  [20:25]
<blackburn> and then attach the second to the custom kernel  [20:25]
<flxb> blackburn: yes this is a good idea  [20:25]
<@sonney2k> blackburn, ok  [20:25]
<blackburn> sonney2k: do you like it?  [20:25]
<@sonney2k> I don't really understand what this is for though?  [20:26]
<@sonney2k> what does it help?  [20:26]
<blackburn> sonney2k: better structure  [20:26]
<blackburn> I was (and others were) confused by the lot of .h and .cpp files in the svm folder  [20:26]
<blackburn> another issue is cross-dependencies of stuff like ocas  [20:27]
<blackburn> multiclass ocas was referencing classifier/svm/libocas  [20:27]
<flxb> blackburn: now, i would do: k = GaussianKernel(), k.set_normalizer(), k.init(train,train), k_tr = k.get_kernel_matrix(), k.init(train, test), k_te = k.get_kernel_matrix(). Would this work? I'm worried that train and test kernel have a different normalization afterwards...  [20:27]
<@sonney2k> blackburn, yeah but isn't it more natural to include shogun/classifier/svm/Tron.h than sth in lib/external?  [20:28]
<@sonney2k> for users of libshogun I mean  [20:28]
<blackburn> I don't know - they should know what tron is  [20:28]
<blackburn> flxb: should work I think, however that can be different.. sonney2k can you help out there?  [20:29]
<@sonney2k> flxb, yes that will work - init is done based on lhs only if there is some train-data-specific stuff to do  [20:31]
<blackburn> wow farbrausch code  [20:31]
<flxb> sonney2k, blackburn: cool, thank you two! i will try it out  [20:31]
<blackburn> I always wanted to check this demoscene code  [20:32]
<@sonney2k> blackburn, yeah crazy... and he blogged about some cooperative garbage collector significantly simplifying code w/o speed loss  [20:33]
<@sonney2k> and no longer incref / decref :)  [20:33]
<blackburn> sonney2k: these guys can teach us a lot of things :D  [20:34]
<@sonney2k> blackburn, regarding lib/external I have a different opinion: I think we should put everything in there that is not meant to be seen from the outside  [20:35]
<@sonney2k> but not the things that make sense to be directly included  [20:35]
<@sonney2k> for example: if we have a severely modified CLibSVM class that is meant to be directly used then it should stay in the hierarchy  [20:37]
<@sonney2k> the SVM_libsvm.{cpp,h} stuff should be in lib/external  [20:38]
<blackburn> sonney2k: could you please provide an example of what should stay and what should go to lib/external?  [20:38]
<@sonney2k> that was the example ^  [20:39]
<blackburn> uhoh what's with me  [20:39]
<blackburn> I can't understand :D  [20:39]
<@sonney2k> CTron / LibLinear would stay  [20:39]
<@sonney2k> libqp would go  [20:40]
<blackburn> because of serious modifications?  [20:40]
<@sonney2k> it is CLibLinear now  [20:41]
<@sonney2k> SVM_linear* would go  [20:41]
<blackburn> sonney2k: ok, is there any library I moved that should go back?  [20:41]
<blackburn> sonney2k: because of?  [20:44]
<wiking> has there been any decision now on where the latent stuff should reside?  [20:45]
<blackburn> wiking: no, but thanks for the reminder :)  [20:45]
<wiking> heheh sonney2k read the logs ;)  [20:45]
<blackburn> sonney2k: I mean Tron is not visible from the outside and it is external code, why should it stay here  [20:46]
-!- gsomix [~gsomix@] has joined #shogun  [20:53]
<gsomix> hi there  [20:53]
<@sonney2k> blackburn, it is a CSGObject-derived object that can be used from external code  [20:53]
<blackburn> sonney2k: makes sense then  [20:55]
<@sonney2k> wiking, ?  [20:55]
<blackburn> sonney2k: ok another issue - do you agree all the latent stuff  [20:55]
<blackburn> should go to shogun/latent  [20:55]
<wiking> sonney2k: blackburn is just explaining it...  [20:55]
<@sonney2k> blackburn, why not have it under features and classifier as normal?  [20:56]
<@sonney2k> I mean we don't have shogun/dot or anything  [20:56]
<blackburn> sonney2k: I think that domain-specific separation is nice  [20:56]
<@sonney2k> so a few more classes don't hurt  [20:56]
<@sonney2k> latent? no? domainadaptation -> yes  [20:57]
<blackburn> so for transfer learning stuff you agree?  [20:57]
<@sonney2k> yes - there is a sh*tload of algorithms for that  [20:58]
<@sonney2k> so it makes sense to have them in a separate dir  [20:58]
<blackburn> shitload? it seems you like transfer learning :D  [20:58]
<@sonney2k> it usually doesn't gain you anything  [20:58]
<@sonney2k> (like MKL)  [20:58]
<blackburn> sonney2k: 'like'  [20:59]
<@sonney2k> there are exceptions to the rule but the costs are exorbitantly massive  [20:59]
<wiking> sonney2k: great to see that you even had a JMLR article about it ;P  [21:00]
<blackburn> sonney2k: but chris reported some gain ;)  [21:00]
<blackburn> with the tree stuff  [21:00]
-!- harshit_ [~harshit@] has joined #shogun  [21:00]
<@sonney2k> wiking, 2 even!  [21:01]
<wiking> and you are 'trashing' it on irc :)))  [21:01]
<blackburn> sonney2k: that is something funny :)  [21:01]
<blackburn> sonney2k: I am afraid to ask what you think about ocas?  [21:01]
<@sonney2k> wiking, in the first article we just developed a faster algorithm to solve MKL  [21:02]
<blackburn> and cutting planes in general?  [21:02]
<@sonney2k> and we used it to understand the classifier  [21:02]
<@sonney2k> not to improve the learning result - which mkl only rarely does  [21:02]
<@sonney2k> in the second article (Lp-norm mkl) we managed to improve some state-of-the-art results *slightly*  [21:03]
<wiking> and that's still worth a jmlr article :)  [21:03]
<wiking> so nice  [21:03]
<wiking> i really need one journal article soon :)  [21:03]
<blackburn> and my $#@$@ work isn't worth anything :E  [21:04]
<@sonney2k> the papers have a lot more than that  [21:04]
<wiking> heheheh yeah i guessed  [21:04]
<@sonney2k> but that is the essence of MKL  [21:04]
-!- PhilTillet [] has joined #shogun  [21:04]
<@sonney2k> if you want to use it - try L2-norm MKL  [21:04]
<wiking> i've treid  [21:04]
<wiking> tried... was giving really bad results :(  [21:05]
<@sonney2k> that will always improve or stay the same (at least on data I had)  [21:05]
<@sonney2k> smells like a bug then  [21:05]
<@sonney2k> wiking, I guess you used multiclass? no idea if alex binder's implementation is well tested  [21:05]
<wiking> dunno really i've just tried MKL on a dataset... i'm getting 7-8% worse results (in accuracy) than with MKL  [21:06]
<@sonney2k> we did only binary  [21:06]
<wiking> i was mening  [21:06]
<@sonney2k> parse error...  [21:06]
<wiking> meaning worse results than with a simple combined kernel  [21:06]
<wiking> and GMNP  [21:06]
<wiking> gmnp with combined kernel is better than mkl  [21:07]
<wiking> the only thing is that if i want the 'best' result then i have to do a parameter search on the weights for the various kernels  [21:08]
<@sonney2k> wiking, the (unweighted) combined kernel is *very hard* to beat  [21:15]
blackburnsonney2k: !! I think we should remove m_q21:18
blackburntotally useless21:18
blackburnflxb: it can be set full from triangle21:18
blackburnand triangle from full21:19
blackburnsome cases there21:19
blackburnsonney2k: no I shall convince you to remove q21:22
-!- puffin444 [230bf329@gateway/web/freenode/ip.] has joined #shogun21:23
puffin444How's it going?21:25
@sonney2kblackburn, why is it useless?21:26
blackburnsonney2k: it is VERY similar to reducing k21:26
blackburnsonney2k: small q leads to k=1 virtually21:27
blackburnit is not of much interest when you do knn21:27
@sonney2kblackburn, why is the buildbot failing?21:27
blackburnI didn't know it is failing :D21:27
@sonney2kpuffin444, strong currents currently :)21:28
@sonney2kblackburn, Re q I don't see this as a problem21:30
blackburnsonney2k: hmm I thought I fixed that21:30
blackburnsonney2k: it is useless21:30
@sonney2kblackburn, so q>=1 has no effect on solution right?21:31
blackburnsonney2k: q>=1??21:31
@sonney2kexcept if distance < 121:31
blackburnit will be farest neighbor21:31
blackburnsonney2k: that is not related to distances21:32
blackburnwhile it is a multiplier21:32
* sonney2k checks the formula21:36
blackburnsonney2k: with q=1 it +1 to class histogram21:36
blackburnwith q=0.5 it +0.5, +0.25, ... to class histogram21:37
@sonney2kahh so points far away get less influence21:37
blackburnyes sounds consistent but it makes no sense in practice21:37
@sonney2kwhy not?21:37
blackburnbetter use parzen windows for that21:37
@sonney2kthat is not really a reason21:38
blackburnI don't think this kind of weighting makes sense21:38
@sonney2kI do21:39
blackburnsonney2k: and it makes things harder while covertree do not place vectors in order21:39
@sonney2kyeah but that doesn't really matter21:40
@sonney2kI mean we can just have a speed up for q=121:40
@sonney2kand otherwise default to what we had21:40
blackburnsonney2k: ah okay it is hard to convince you :)21:40
@sonney2kwhich we should keep anyways21:40
@sonney2kblackburn, well you implemneted this and it sounds like it makes a lot of sense to me21:41
blackburnsonney2k: I still think it is a stupid heuristics21:41
@sonney2kit protects you a bit from outliers - so I could imagine higher k to be useful in k-NN (I'ver never seen k>7 or so to be useful for q=1)21:44
blackburnsonney2k: with k=10 and q=0.5 farest vector will have weight 1e-5 :)21:45
@sonney2kwith C=1e-5 SVM alphas will be mostly 021:46
blackburnit is the same like there was no vector at all21:46
@sonney2kobviously 0.5 is an extreme choice already...21:47
blackburn0.9? 0.99?21:47
blackburnalmost no influence at all21:47
@sonney2k0.9**10 -> 0.3421:48
<@sonney2k> what to do with harshit's TSVM?  [21:49]
<blackburn> sonney2k: ask harshit_ :D  [21:49]
<@sonney2k> blackburn, what I meant is: have you looked at the code?  [21:54]
<@sonney2k> PhilTillet, I think for nearest centroid it doesn't make sense to have a store_model_features method at all  [21:55]
<blackburn> sonney2k: no, while he said it is incomplete  [21:55]
<@sonney2k> I mean you have to store the centroids  [21:55]
<@sonney2k> that is what was meant with store model features  [21:56]
<PhilTillet> hmm okay, and I have no real choice but to store them during training :p  [21:57]
<@sonney2k> in the context of e.g. SVM it makes sense to not copy all the SVs (because there can be many and they can be huge) but just their indices  [21:57]
<PhilTillet> got it  [21:57]
<@sonney2k> so store_model_features has a meaning :)  [21:57]
<@sonney2k> PhilTillet, ok so please put the call to kernel init back in and all good :)  [21:57]
<@sonney2k> PhilTillet, another thing - you assume that you are getting CSimpleFeatures - but you should check (at least add some ASSERT()'s checking for get_feature_class() = CT_SIMPLE)  [22:01]
<@sonney2k> PhilTillet, ahh and I am wondering where m_shrinking is used?  [22:02]
<PhilTillet> Ah, I had a question concerning that :D (not yet implemented)  [22:03]
<PhilTillet> should the centroids be shrunken after training, or shrunken when classifying?  [22:03]
<PhilTillet> the first option makes classification faster but the second option avoids re-training when changing m_shrinking  [22:04]
<PhilTillet> so I don't really know :p  [22:04]
<@sonney2k> PhilTillet, I guess doing both would resolve the conflict :)  [22:05]
<@sonney2k> i.e. do it in training  [22:06]
<@sonney2k> and add some apply_shrinked()  [22:06]
<@sonney2k> function that would do the shrinking would do  [22:06]
<gsomix> good night, guys  [22:11]
-!- flxb [] has left #shogun []  [22:14]
<@sonney2k> gsomix, nite  [22:15]
<puffin444> Is it night for most people here?  [22:18]
<wiking> sonney2k blackburn so we have a consensus that the latent stuff will go to the usual places, i.e. the latentfeatures and latentlabel classes under shogun/features and the latentlinearclassifier under classifier/svm/ ?  [22:21]
<@sonney2k> wiking, yes  [22:24]
<wiking> ok cool  [22:24]
<@sonney2k> puffin444, yes, 22hrs here and 24hrs in russia :)  [22:24]
-!- n4nd0 [] has joined #shogun  [22:26]
<@sonney2k> hi n4nd0  [22:26]
<n4nd0> hey sonney2k  [22:26]
<n4nd0> how is it going?  [22:26]
<n4nd0> and hi to everyone else too :)  [22:27]
<@sonney2k> n4nd0, just had a look at your patch - nice job :)  [22:31]
<@sonney2k> puffin444, can we get some nice shiny graphical example for your GP implementation - I think olivier has sth in his (python based) toolbox :)  [22:31]
<n4nd0> sonney2k: thanks! I have your messages, I will start making the modifications and continue with the job this next week  [22:31]
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]  [22:33]
<puffin444> That's something I could try. Is there visualization stuff in the Python extensions?  [22:35]
<@sonney2k> puffin444, not really - just use the excellent matplotlib - its gallery/examples certainly have what you need :)  [22:44]
<puffin444> Sounds good.  [22:46]
<n4nd0> I think there is a nice plot in scikits for GPs  [22:46]
<n4nd0> puffin444: you could use it as inspiration  [22:46]
<puffin444> I'll take a look at it.  [22:47]
<@sonney2k> n4nd0, heh - they certainly have some nice documentation  [22:48]
<@sonney2k> I don't understand why no one here wants to document things  [22:48]
<n4nd0> sonney2k: you mean within the code or in web format?  [22:48]
<@sonney2k> even though it is much easier just to write a little doxygen code, no one writes complex formulas or anything in the class description  [22:49]
<n4nd0> sonney2k: aham I see  [22:50]
<n4nd0> personally I like more something like the one in scikits  [22:50]
<n4nd0> I think they do it with sphinx  [22:51]
<n4nd0> sonney2k: would you like to see that in shogun?  [22:52]
<@sonney2k> n4nd0, the problem is more that someone has to write all this stuff  [22:53]
<@sonney2k> *and* keep it up-to-date  [22:53]
<n4nd0> maybe we could start with it step by step  [22:53]
<harshit_> sonney2k: hey, TSVM is not complete for now. I think there is some problem with the tsvm part of ssl.cpp, which currently I am figuring out.  [22:54]
<harshit_> also due to university tests i am not able to give much time so hopefully that would be done by next sunday  [22:54]
<puffin444> Yeah it looks pretty, but I don't know how much time a scikit-like graph would take.  [22:56]
<n4nd0> puffin444: do you mean to code it?  [22:56]
<@sonney2k> harshit_, ok np! university tests are certainly more important!  [22:56]
<puffin444> Yeah I don't know if I can code this week or not.  [22:56]
<puffin444> *code it*  [22:57]
<n4nd0> puffin444: aham, all right  [22:57]
<n4nd0> puffin444: anyway, in case you didn't see it, here is the code to do that figure  [22:57]
<n4nd0> puffin444: if the method you have implemented in shogun has a similar interface to the one there, you wouldn't need to change much more than the actual part to train the GP  [22:58]
<n4nd0> all the plot stuff is there  [22:59]
<n4nd0> you can change the colours, the function to predict, and you have a complete new example of your own :)  [22:59]
<puffin444> Oh I see.  [23:00]
<n4nd0> puffin444: it is just a suggestion of course, I hope you don't mind I said it!  [23:00]
<puffin444> No that's fine. Yes this might work. My time is just somewhat limited as I have a major exam next week.  [23:01]
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has joined #shogun  [23:11]
<blackburn> n4nd0: have you seen sonney2k wants q in knn?  [23:15]
<n4nd0> blackburn: yeah  [23:15]
<n4nd0> blackburn: I told you it could be better to wait ;)  [23:16]
<n4nd0> blackburn: no way to convince him about it?  [23:16]
<blackburn> n4nd0: I still think it should go :D  [23:17]
<blackburn> n4nd0: I don't know, probably no way :D  [23:17]
<n4nd0> haha ok, let's keep it then  [23:18]
<blackburn> I am sorry I said it should go  [23:19]
-!- vikram360 [~vikram360@] has quit [Ping timeout: 276 seconds]  [23:19]
<n4nd0> blackburn: no problem man! no need to say sorry ;)  [23:20]
<@sonney2k> indeed no need to be sorry  [23:28]
<@sonney2k> nite guys!  [23:28]
<n4nd0> good night  [23:29]
<puffin444> I actually have to go too.  [23:32]
<puffin444> I'll see what I can do with GP visualization.  [23:32]
-!- puffin444 [230bf329@gateway/web/freenode/ip.] has left #shogun []  [23:34]
-!- blackburn [5bde8018@gateway/web/freenode/ip.] has quit [Quit: Page closed]  [23:35]
<n4nd0> another one abandons, good night people!  [23:36]
-!- n4nd0 [] has quit [Quit: leaving]  [23:36]
-!- PhilTillet [] has quit [Ping timeout: 264 seconds]  [23:49]
-!- PhilTillet [] has joined #shogun  [23:52]
-!- PhilTillet [] has quit [Remote host closed the connection]  [23:53]
--- Log closed Mon Apr 16 00:00:19 2012