--- Log opened Sat Mar 31 00:00:19 2012
-!- Vuvu [~Vivan_Ric@] has quit [Quit: Leaving]00:23
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 246 seconds]00:31
n4nd0sonney2k: hey, still around? I am getting some weird results with one-vs-one and I don't know if they might actually make sense00:31
@sonney2kn4nd0, what is the problem?00:34
n4nd0sonney2k: so I am getting pretty bad classification results with one-vs-one using LibSVM + GaussianKernel00:35
n4nd0sonney2k: one-vs-rest succeeds though00:35
@sonney2kn4nd0, you could try to compare results with LibSVMMultiClass00:35
@sonney2kit does OvO too00:35
n4nd0sonney2k: ok, thank you!00:36
@sonney2kso results should be the same or similar at least00:36
@sonney2kn4nd0, did you copy the OvO routine from CMultiClassSVM ?00:36
n4nd0sonney2k: yeah00:36
PhilTilletsonney2k, I think I have a reason for the OpenCL being slow on your computer00:36
@sonney2kPhilTillet, ok so how do I turn it on then?00:37
PhilTillethmmm, do you have the proprietary NVidia Drivers?00:38
PhilTilletif so, then you should have your libopencl.so somewhere in an nvidia folder, for me it is /usr/lib/nvidia-current/libopencl.so00:39
@sonney2kdoesn't help00:43
@sonney2kPhilTillet, any way to debug whether opencl is used?00:43
PhilTilletyou mean on which device it is used?00:44
PhilTilletthere is a viennacl function for that, hmm wait a bit00:44
n4nd0sonney2k: the results with LibSVMMultiClass are good, the classification is correct, so there must be errors in one-vs-one :S00:45
PhilTilletsonney2k, std::cout << viennacl::ocl::current_device().info();00:45
@sonney2kn4nd0, how do you map labels?00:45
@sonney2kn4nd0, they should be in range 0...<nr classes-1>00:46
@sonney2kand then you need to map them to +1 / -100:46
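A minimal sketch of the two-step relabeling described above: classes first mapped into 0..&lt;nr classes-1&gt;, then, for each pair (c1, c2), the examples of those two classes relabeled +1/-1 for one binary subproblem. The helper name is hypothetical, not Shogun code:

```python
def ovo_binary_labels(labels, c1, c2):
    """One one-vs-one subproblem: select the examples belonging to
    class c1 or c2 and relabel them +1 (c1) / -1 (c2)."""
    subset = [i for i, y in enumerate(labels) if y in (c1, c2)]
    binary = [+1 if labels[i] == c1 else -1 for i in subset]
    return subset, binary

# labels already mapped to the range 0..<nr classes-1>
labels = [0, 2, 1, 0, 2, 1]
subset, binary = ovo_binary_labels(labels, 0, 2)
```

During apply(), each of the k*(k-1)/2 machines then votes for one of its two classes, and the class with the most votes wins.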
n4nd0n4nd0: yeah, that's how I do it00:46
n4nd0sonney2k: let me show you the code00:46
PhilTilletn4nd0, you send messages to yourself? :p00:46
n4nd0shit, again talking to me :P00:46
n4nd0yeah ... I do it more often than I want to admit00:47
@sonney2kPhilTillet, CL Device Vendor ID: 409800:47
@sonney2kCL Device Name: Intel(R) Core(TM)2 Duo CPU     T9900  @ 3.06GHz00:47
@sonney2kCL Driver Version: 2.000:47
@sonney2kso you are right00:47
@sonney2kquestion is why the nvidia one is not taken?!00:47
PhilTilletthere can be several reasons00:47
PhilTilletdo you have an nvidia.icd in your /etc/OpenCL/vendors/ ?00:48
@sonney2kyes and an amdocl64.icd00:49
PhilTilletI think the reason why is that the program is linked to AMD's ocl00:49
PhilTilletAMD's OpenCL implementation does not detect the NVidia Card as a device00:49
PhilTilletand NVidia's OCL does not detect the Intel CPU00:50
PhilTillet(yes, it's sort of a headache XD)00:50
@sonney2klet me remove the amd one00:50
@sonney2kCL Device Vendor ID: 431800:50
@sonney2kCL Device Name: GeForce 9600M GT00:50
@sonney2kCL Driver Version: 295.2000:50
@sonney2kCPU Apply time : 0.48297600:50
@sonney2kGPU Apply time : 0.24934900:50
n4nd0sonney2k: there it is train_one_vs_one and classify_one_vs_one00:50
PhilTilletthat was the reason00:50
@sonney2kCL Device Max Compute Units: 400:51
@sonney2kPhilTillet, now benchmark for 1000x100k and get the error down to <1e-6 and all good00:52
PhilTilletsonney2k, okay :p00:52
PhilTilleti'll do some tests on small inputs00:53
@sonney2kn4nd0, where is that?00:53
PhilTilletto see where the error comes from00:53
@sonney2kmakes sense00:53
n4nd0sonney2k: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/machine/MulticlassMachine.cpp00:53
PhilTilletsonney2k, float64_t* lab;00:57
PhilTilletfloat32_t* feat;00:57
PhilTilletfloat64_t* alphas; ... are you sure the computation is not done in double on the cpu?00:57
@sonney2kPhilTillet, only alphas00:57
@sonney2kbut try on your monster GPU with double00:57
PhilTilletlol, monster GPU :p00:57
@sonney2kthen difference should be 1e-1600:57
PhilTillet1e-16 per example or total?01:00
@sonney2kPhilTillet, total01:01
@sonney2kn4nd0, I suspect subsets01:01
n4nd0sonney2k: why do you think so?01:03
@sonney2kn4nd0, if you print the votes vector - does it show sth suspicious like just one class?01:03
n4nd0sonney2k: yes01:04
PhilTilletsonney2k, so 100K vectors of dim 1000 and 1000 Support vectors?01:04
@sonney2kPhilTillet, something that fits in memory :)01:04
@sonney2kmaybe you have to reduce dim01:05
@sonney2kto 100 or so01:05
PhilTilletyes, I'll see01:05
PhilTilletBenchmarking cpu...01:05
PhilTilletCPU Apply time : 18.719501:05
PhilTilletCompiling GPU Kernels...01:05
PhilTilletBenchmarking GPUs..01:05
PhilTilletGPU Apply time : 2.1984501:05
PhilTilletTotal  error : 0.00101501:05
PhilTilleton i7 950 vs GTX 47001:06
PhilTilletnow got to reduce the error, but it will be harder :p01:06
@sonney2kwell try with 2 dims and 1 SV and 1 output :)01:06
PhilTilletyes, I tried01:06
PhilTilletsome vectors have 0.0000000 error01:06
PhilTilletsome have like 1e-601:07
PhilTilletlike the last digit of the floating point representation01:07
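The "last digit" error PhilTillet describes is just single-precision machine epsilon: float32 carries roughly 7 decimal digits, so per-element errors around 1e-6/1e-7 are the best one can expect. A quick pure-Python check (float32 simulated via a struct round-trip, since Python floats are doubles):

```python
import struct

def float32_roundtrip(x):
    # round a Python float (IEEE-754 double) to single precision and back
    return struct.unpack('f', struct.pack('f', x))[0]

# machine epsilon of double precision (2^-52)
eps64 = 1.0
while 1.0 + eps64 / 2 != 1.0:
    eps64 /= 2

# machine epsilon of single precision (2^-23), via the round-trip
eps32 = 1.0
while float32_roundtrip(1.0 + eps32 / 2) != 1.0:
    eps32 /= 2
```

eps32 comes out near 1.19e-7, which matches the observed per-element differences; eps64 near 2.22e-16 matches the 1e-16 target mentioned below.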
@sonney2kn4nd0, I think the reason is that features can only have one subset01:07
@sonney2kI think you would need to store the subset too01:07
@sonney2kand assign it in apply01:07
@sonney2kn4nd0, or do everything without subsets01:08
n4nd0sonney2k: I don't understand clearly, do you mean that the subsets cannot be set and removed for every machine?01:08
n4nd0sonney2k: how could it be done without subsets?01:09
@sonney2kyou assign the subset to the features right?01:09
@sonney2kso there can be only one such assignment01:10
@sonney2kwhich will be the last one01:10
n4nd0set_machine_subset is defined in LinearMulticlassMachine or KernelMulticlassMachine and it basically calls the method set_subset on the features01:10
@sonney2kyeah but does KernelMulticlassMachine store the subset?01:11
@sonney2kor just assign it to features?01:11
n4nd0no, it doesn't store it01:11
n4nd0assign to features01:11
@sonney2kif it assigns things to features only then that is the explanation01:11
@sonney2kbecause in apply() you won't have the correct subset set01:12
n4nd0aham I see01:12
@sonney2kso support vectors etc need to be converted back to the features w/o subset01:12
@sonney2kI guess that is the best solution01:12
@sonney2kthere should be a method for that btw01:13
n4nd0but since the subset is removed at the end01:13
n4nd0then the subset shouldn't be there when apply is called right?01:13
@sonney2kn4nd0, you could call CKernelMachine::store_model_features()01:14
@sonney2kthen it will store the SVs01:14
@sonney2kand all good01:15
n4nd0mmm I think I am not getting the point correctly, I am sorry01:15
@sonney2kn4nd0, yeah but the problem is that the kernel machine doesn't internally store the trained machine01:15
@sonney2kit only stores indices to support vectors and alphas01:15
@sonney2kso these sv-indices will be relative to the subsets that were used01:16
@sonney2kso the best solution is to make the svm store not only the sv-indices but the real feature vectors01:16
@sonney2kwhich is what the store_model_features() call will do01:17
n4nd0but why will storing the feature vectors solve the problem?01:17
PhilTilletsonney2k, double on alphas makes things better but there is still a small error at the end01:17
@sonney2kPhilTillet, and with double?01:17
PhilTilletBenchmarking cpu...01:17
PhilTilletCPU Apply time : 18.745601:17
PhilTilletCompiling GPU Kernels...01:17
PhilTilletBenchmarking GPUs..01:17
PhilTilletGPU Apply time : 2.195501:17
PhilTilletTotal  error : 0.00001501:17
PhilTilletwith 100k of 100001:17
PhilTilletand double on alphas01:18
PhilTilletdon't know where the rounding error takes place01:18
@sonney2kPhilTillet, and when you use just 1k instead of 100k and fewer dims?01:18
@sonney2kn4nd0, then the SVM is self-contained01:19
PhilTillet1k SV ?01:19
PhilTilletoh my01:19
PhilTilletvery bad01:19
PhilTilletTotal  error : 0.00601801:19
@sonney2kn4nd0, it won't store indices relative to anything but the features themselves01:19
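A toy illustration (hypothetical data, not Shogun code) of the bug being discussed: SV indices stored relative to a training subset point at the wrong rows once the subset is removed, whereas copying the actual SV vectors at the end of training, store_model_features()-style, keeps the machine self-contained:

```python
# full (toy) feature matrix
features = [[0.0], [1.0], [2.0], [3.0]]
subset = [2, 3]               # rows used to train one OvO machine
sv_idx_in_subset = [1]        # SV index the machine stored during training

# After training the subset is removed; resolving the stored index
# against the full features now picks the wrong vector:
wrong_sv = features[sv_idx_in_subset[0]]            # [1.0], not the real SV

# Fix: copy the actual SV vectors while the subset mapping is still known.
stored_svs = [features[subset[i]] for i in sv_idx_in_subset]
right_sv = stored_svs[0]                            # [3.0], the real SV
```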
@sonney2kPhilTillet, that sounds like a bug somewhere01:19
n4nd0sonney2k: ok, I think I am starting to see the point01:20
@sonney2kPhilTillet, hopefully in your opencl code :D01:20
PhilTilletlol :D01:20
n4nd0sonney2k: I have to think about how to implement it now, since the training occurs in CMulticlassMachine and both CLinearMulticlassMachine and CKernelMulticlassMachine derive from it01:21
PhilTilletwhen there's an error it's always on the last digit01:21
n4nd0sonney2k: but we just want to do this for CKernelMulticlassMachine01:21
PhilTilletand no error for out[0] to out[2]01:21
PhilTillettotal_diff was a float01:22
@sonney2kPhilTillet, bah01:22
@sonney2kn4nd0, just do: set_store_model_features(true);01:23
@sonney2kthen it will do that for you :)01:23
@sonney2kno need to do any extra magic01:24
@sonney2kn4nd0, btw one problem later might be that there is already a subset set01:24
@sonney2kn4nd0, so one would need nested subsets %-)01:24
n4nd0sonney2k: but isn't it enough to remove the subsets for every machine that is trained?01:25
n4nd0once the machine is trained, it removes the subsets01:25
@sonney2kn4nd0, that is what set_store_model_features(true); will automagically do01:26
@sonney2k+ store the features01:26
@sonney2kthat will be expensive for one-vs-rest though01:26
@sonney2kfor that it is much better to just store indices01:27
PhilTilletsonney2k, seems like the problem is a bit more complicated01:27
n4nd0sonney2k: one vs rest doesn't need subsets I think01:27
PhilTilletfor NUM 15 and DIM 3 :01:27
PhilTilletDifference between GPU and CPU at pos 0 1.64044e-0701:27
PhilTilletDifference between GPU and CPU at pos 1 4.05671e-0801:27
PhilTilletDifference between GPU and CPU at pos 2 1.7945e-0701:27
PhilTilletDifference between GPU and CPU at pos 3 2.29995e-0701:27
PhilTilletDifference between GPU and CPU at pos 4 1.46e-0701:27
PhilTilletDifference between GPU and CPU at pos 5 3.52479e-0801:27
PhilTilletDifference between GPU and CPU at pos 6 2.49558e-0701:27
PhilTilletDifference between GPU and CPU at pos 7 6.59237e-0801:27
PhilTilletDifference between GPU and CPU at pos 8 2.7401e-0701:27
PhilTilletDifference between GPU and CPU at pos 9 1.81766e-0801:27
PhilTilletDifference between GPU and CPU at pos 10 1.48443e-0701:27
PhilTilletDifference between GPU and CPU at pos 11 3.06445e-0701:27
PhilTilletDifference between GPU and CPU at pos 12 1.21206e-0701:27
PhilTilletDifference between GPU and CPU at pos 13 6.44709e-0801:27
PhilTilletDifference between GPU and CPU at pos 14 2.98222e-0801:27
PhilTilletTotal  error : 0.00000201:27
PhilTillet(sorry for the spam :D)01:28
@sonney2kn4nd0, true but then you should only set store module features to  true for one-vs-one01:28
@sonney2kPhilTillet, float again?01:28
PhilTilletno, double01:28
@sonney2kthat's too much :(01:28
PhilTilletah wait01:28
PhilTilletthere are some other differences01:28
PhilTilletconst float64_t rbf_width01:28
PhilTilleti used float not double for the width01:29
n4nd0sonney2k: if I have understood correctly, where it makes more sense would be in line 185 here01:29
@sonney2kPhilTillet, well I guess you have to debug step by step01:29
n4nd0sonney2k: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/machine/MulticlassMachine.cpp01:29
@sonney2kfirst check if the kernel is the same01:29
n4nd0sonney2k: right after the training01:29
PhilTilletwill be super fun01:29
@sonney2kthen alpha*k(x,y)01:29
@sonney2kwell just for one element01:30
@sonney2kshould be easy01:30
@sonney2kn4nd0, yes exactly - try it and report if it works!01:31
-!- flxb [~cronor@g225030062.adsl.alicedsl.de] has quit [Quit: flxb]01:42
PhilTilletsonney2k, good news01:46
PhilTilletsort of01:46
PhilTilleti am using double everywhere01:46
PhilTilletthe error is about 1e-14 per example01:47
PhilTilletdon't think I can get lower01:47
PhilTilletbut now the improvement on the GPU is no longer *901:48
PhilTilletbut more like *301:48
PhilTillethad to reduce dim to 10001:51
PhilTillet(else, it does not fit in memory)01:51
@sonney2kn4nd0, did it work?01:51
@sonney2kPhilTillet, heh01:51
PhilTilletBenchmarking cpu...01:51
PhilTilletCPU Apply time : 3.1145101:51
PhilTilletCompiling GPU Kernels...01:51
PhilTilletBenchmarking GPUs..01:51
PhilTilletGPU Apply time : 0.6468501:51
PhilTilletTotal  error : 0.00000001:51
PhilTilletfor 100k examples of dim 10001:51
PhilTilletand 1000 SVs01:51
n4nd0sonney2k: not yet, idk why but now apply for one-vs-one seems not to output anything01:51
@sonney2kPhilTillet, so things seem OK - increasing precision reduces the error. I would have hoped for 1e-16 or 15 but ok01:53
@sonney2kmight be depending on width, summing order etc01:53
PhilTilletbut seems like double precision really reduces performances :p01:53
PhilTilletseems like graphics cards are still slower than cpus at doing double operations01:54
@sonney2kbut not soo much on CPU01:54
PhilTilleton my mobile GPU things are even worse01:54
@sonney2kI mean also CPU benefits from using floats01:54
PhilTilletyes but the difference between float and double on CPU does not seem so significant01:54
PhilTilleti mean there is no real performance drop01:54
@sonney2kthere should be01:55
@sonney2kwe should be measuring memory speed01:55
@sonney2kso close to factor 2 speedup?!01:55
PhilTilleton my mobile GPU yes01:56
PhilTilleton high end GPU and high end CPU, more like factor 401:56
@sonney2kalso on CPU01:56
PhilTilletoh you mean for float compared to double01:56
PhilTilletbut another point is that I changed everything to double in GPU01:57
PhilTilleteven temporaries etc01:57
PhilTilleton CPU, some temporaries might be float01:57
@sonney2kPhilTillet, ahh ok it is using double internally - just not for kernel01:57
@sonney2kthat is why01:58
PhilTilletoh ok01:58
@sonney2kn4nd0, uhh01:58
PhilTilletso i should not use double everywhere in kernel too01:58
PhilTilletthat explains the error difference too then I guess01:58
PhilTilleti don't get it01:58
PhilTilletwhere does shogun use floating point temporaries?01:58
PhilTilletin kernels?01:59
@sonney2kPhilTillet, I meant that even when you use float on CPU it will use double for most compuations01:59
@sonney2kexcept for kernel01:59
PhilTilletoh I see01:59
@sonney2kPhilTillet, shogun never uses float32_t01:59
PhilTilletis the implementation multithreaded for CPUs?01:59
@sonney2kas temporaries01:59
@sonney2kyes multithreaded01:59
@sonney2kso load should be #cpus02:00
PhilTilletso *4 sounds realistic02:00
PhilTillet*9 was a bit unrealistic :p02:00
@sonney2kanyway most expensive is kernel compuation02:00
PhilTilletsame on GPU02:00
PhilTilletbut it is also where you can get most GFlops with caching02:00
n4nd0sonney2k: idk why yet, but doing m_machine->set_store_model_features(true) in MulticlassMachine.cpp:184 makes apply return nothing :S02:01
@sonney2kPhilTillet, maybe one can be more clever and compute k(x_i, x) for all x first and then go to x_j02:01
PhilTilletwell, on cpu you mean?02:01
@sonney2kahh but in your case examples fit in memory...02:01
PhilTilleton GPU it is somewhat treated as a matrix-matrix product02:02
PhilTilletbut where operations are not inner product but norm_202:02
PhilTilletexp(-norm2/width) actually02:02
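For reference, the apply step being benchmarked computes, for every test vector x_j, out[j] = b + sum_i alpha_i * exp(-||sv_i - x_j||^2 / width). A plain-Python reference version (hypothetical names, no batching or GPU tricks):

```python
import math

def gaussian_apply(svs, alphas, width, test_vectors, bias=0.0):
    """Reference SVM apply with a Gaussian/RBF kernel:
    out[j] = bias + sum_i alphas[i] * exp(-||sv_i - x_j||^2 / width)."""
    out = []
    for x in test_vectors:
        acc = bias
        for sv, a in zip(svs, alphas):
            norm2 = sum((s - t) ** 2 for s, t in zip(sv, x))
            acc += a * math.exp(-norm2 / width)
        out.append(acc)
    return out

# tiny sanity check: one SV with alpha=2, width=1
out = gaussian_apply([[0.0, 0.0]], [2.0], 1.0, [[0.0, 0.0], [1.0, 0.0]])
```

With the test vector equal to the SV the kernel value is exp(0) = 1, so out[0] is just alpha; a GPU implementation can be validated element-wise against a loop like this.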
@sonney2kn4nd0, great :(02:02
@sonney2kn4nd0, I guess you need to debug what is going on (print out number of SVs / and values of alphas or so)02:03
PhilTilletcould maybe get a little increase by mapping features and stuff to texture cache, but it is not something I am used to doing and I am not even sure about the gain that could be obtained02:03
@sonney2kn4nd0, if that doesn't work ask on the mailinglist - heiko wrote that code so he can help02:03
n4nd0sonney2k: ok, thank you - I am going to try to see now why does that happen, why it returns nothing in this case02:04
@sonney2kPhilTillet, well you should make sure all x_i are in GPU mem02:04
@sonney2kn4nd0, ok02:04
n4nd0sonney2k: in any case, what was the other possibility of doing one-vs-one without subsets?02:04
@sonney2kfor x ... not so important02:04
@sonney2kn4nd0, well craft new features / labels02:04
@sonney2kcould mean huge overhead I know...02:05
@sonney2kanyway I have to sleep now02:05
n4nd0good night, bye02:05
n4nd0it looks pretty cool your OpenGL stuff02:10
n4nd0that was for you PhilTillet ;)02:10
PhilTilletOpenCL you mean?02:10
n4nd0c'mon ... isn't kind of late? allow me some mistakes :P02:10
PhilTillethaha :D02:11
PhilTilletit's true02:11
PhilTilleti'm tired02:11
PhilTilletbut jet lag pwnd me...02:11
n4nd0haha I see02:11
n4nd0you said you were in a conference right?02:12
PhilTilletHigh Performance Computing conference02:13
n4nd0sounds interesting02:14
n4nd0I hope you learnt a lot of new stuff02:14
PhilTilletwell to be honest02:14
PhilTilleti just didn't get most of the stuff02:14
n4nd0well ... honesty is a good feature! :P02:15
PhilTilleti mean, i was very lucky to get there02:16
PhilTilletit was not made for students02:16
PhilTilletthat's also why I didn't get most of the stuffs ^^02:16
PhilTilletbut well, was able to give my talk, that was why i was here :p02:16
n4nd0I hope it went good02:17
PhilTilletwell, more or less, i think it was fine... but you know at the end you're always like "oh i've forgotten to talk about that and that and that"02:17
n4nd0yeah, I know that feeling02:19
n4nd0good night people03:06
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving]03:07
PhilTilletgood night :)03:07
-!- vikram360 [~vikram360@] has quit [Ping timeout: 260 seconds]03:58
-!- vikram360 [~vikram360@] has joined #shogun05:13
-!- vikram360 [~vikram360@] has quit [Ping timeout: 248 seconds]05:22
-!- vikram360 [~vikram360@] has joined #shogun05:23
-!- Miggy [~piggy@] has quit [Ping timeout: 252 seconds]05:38
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has left #shogun ["Leaving"]06:00
-!- vikram360 [~vikram360@] has quit [Ping timeout: 244 seconds]06:25
-!- av3ngr [av3ngr@nat/redhat/x-kwymmyssrqxjiwle] has joined #shogun07:53
-!- harshit_ [~harshit@] has joined #shogun09:16
-!- av3ngr [av3ngr@nat/redhat/x-kwymmyssrqxjiwle] has quit [Quit: That's all folks!]09:54
-!- harshit_ [~harshit@] has quit [Ping timeout: 260 seconds]10:27
-!- harshit_ [~harshit@] has joined #shogun10:30
-!- blackburn [~qdrgsm@] has joined #shogun10:39
-!- vikram360 [~vikram360@] has joined #shogun11:04
-!- flxb [~cronor@e177094239.adsl.alicedsl.de] has joined #shogun11:05
-!- gsomix [~gsomix@] has quit [Ping timeout: 248 seconds]11:06
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun11:12
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun11:17
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Client Quit]11:17
-!- flxb_ [~cronor@e177094239.adsl.alicedsl.de] has joined #shogun11:18
-!- flxb [~cronor@e177094239.adsl.alicedsl.de] has quit [Ping timeout: 246 seconds]11:19
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has joined #shogun11:21
-!- flxb_ [~cronor@e177094239.adsl.alicedsl.de] has quit [Ping timeout: 246 seconds]11:22
-!- jckrz [~jacek@89-69-164-5.dynamic.chello.pl] has joined #shogun11:36
-!- harshit_ [~harshit@] has quit [Read error: Connection reset by peer]11:50
-!- gsomix [~gsomix@] has joined #shogun12:04
PhilTillethow are you blackburn ?12:16
blackburnPhilTillet: fine, what about you?12:17
-!- Marty28 [~Marty@] has joined #shogun12:21
PhilTilletblackburn, fine :)12:30
PhilTilletsolved most problems with OpenCL12:31
blackburnyeah I have seen you were discussing things yesterday12:33
-!- harshit_ [~harshit@] has joined #shogun12:35
PhilTilletStill there is one last problem on precision, I asked Soeren about that12:39
PhilTilletfor computing norm(x-y) you do norm(x) + norm(y) - 2*inner_prod(x,y)12:39
PhilTilletin the gaussian kernel12:39
PhilTilletwhy? :)12:39
PhilTillet(I use inner_prod(x-y,x-y) so the round off error is different)12:40
blackburnPhilTillet: which is better in means of precision?12:41
PhilTilletI have absolutely no idea :D12:42
PhilTilletbut the two are different12:42
PhilTilletthe second one might be faster to compute though12:42
blackburnPhilTillet: well you may precompute norm things right?12:42
PhilTilletwhat do you mean?12:45
PhilTilletI don't precompute anything12:46
blackburnI mean dot(x-y,x-y) can be partially precomputed12:47
PhilTilletwell I kinda do a for loop with "diff = x[i] - y[i] ; sum+=pow(diff,2);"12:48
blackburnso what is faster12:49
blackburnto compute pow'ed differences12:49
blackburnor to compute dot product?12:49
PhilTilletto compute pow'ed difference for sure, but from a rounding error point of view I have no idea which is more precise :p12:51
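The precision difference is easy to provoke: the expanded form ||x||^2 + ||y||^2 - 2&lt;x,y&gt; cancels catastrophically when x and y are close but large in magnitude, while the direct sum of squared differences does not. A small double-precision demonstration (contrived values chosen to trigger the cancellation):

```python
x = [1e8]
y = [1e8 + 1e-4]    # very close to x, large magnitude

# direct form: the "diff = x[i] - y[i]; sum += diff**2" loop
direct = sum((a - b) ** 2 for a, b in zip(x, y))

# expanded form: norm(x) + norm(y) - 2*inner_prod(x, y)
norm_x = sum(a * a for a in x)
norm_y = sum(b * b for b in y)
dot_xy = sum(a * b for a, b in zip(x, y))
expanded = norm_x + norm_y - 2.0 * dot_xy

# the true value is ~1e-8; the direct form recovers it to many digits,
# the expanded form loses it entirely to cancellation near 1e16
```

The expanded form is popular because the norms can be precomputed once per vector and the dot products batched as a matrix-matrix product, but for a Gaussian kernel it trades accuracy for speed.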
-!- gsomix [~gsomix@] has quit [Ping timeout: 246 seconds]12:52
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun12:54
n4nd0blackburn: hey there!13:02
Marty28Hi! Do you know where I can find some relatively new ***Python*** scripts on computing and visualizing POIMs? (http://www.fml.tuebingen.mpg.de/raetsch/projects/POIM)13:04
n4nd0Marty28: no idea :(13:05
Marty28I guess I have to ask Gunnar Rätsch.13:06
flxbWhy is it that in python I have to import KernelRidgeRegression and in the online documentation the class is called KRR? Is the documentation outdated?13:42
n4nd0flxb: is in the online doc for python or for another language?13:45
n4nd0flxb: naming changes among languages13:45
flxbn4nd0: i was looking at the c++ classes. where can i find the online doc for python?13:46
n4nd0flxb: I don't think there is doc for python13:46
n4nd0flxb: we only have doxygen doc extracted from C++ source code13:47
n4nd0flxb: but you should be able to use help() in python13:47
n4nd0flxb: and if you have a problem like this, that you do know the name in C++ but not in python, I suggest you to check the file13:48
flxbn4nd0: i was always curious about that. why is there no documentation on how to use shogun in python?13:48
n4nd0flxb: well, I think that if you issue help you will find quite a bit of info as well13:49
n4nd0but yes, more documentation would be quite useful probably13:50
n4nd0anyway, I think that if you read C++ doc/check the code it is normally enough to get the point of everything13:50
n4nd0and don't forget the examples!13:51
-!- harshit_ [~harshit@] has quit [Ping timeout: 246 seconds]13:51
flxbn4nd0: as far as i know the python modular examples never work13:55
n4nd0flxb: that's weird13:55
n4nd0flxb: did you configure with support for python_modular?13:55
flxbn4nd0: yes it works fine. but i have to import modshogun, not shogun13:56
n4nd0flxb: I didn't get it, do the examples work on your machine or not?13:57
flxbn4nd0: If I update all imports they would work, I guess.13:57
n4nd0flxb: I don't think it is normal that you have to do that13:58
flxbI don't know if it is just a problem on my machine, but i cannot import shogun. i have to import modshogun.13:58
n4nd0no problem in my machine importing shogun13:58
n4nd0and the examples work fine13:59
flxbn4nd0: ah, i think it is because i don't make install. i just compile and add src/interfaces/python_modular to my python path14:02
n4nd0yes, try to do sudo make install as well14:03
-!- harshit_ [~harshit@] has joined #shogun14:18
-!- jckrz [~jacek@89-69-164-5.dynamic.chello.pl] has quit [Quit: Ex-Chat]14:30
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds]14:55
flxbn4nd0: how do you install when you can't sudo?14:56
n4nd0flxb: can you make install?14:57
flxbn4nd0: no it tries to alter permissions on /usr/lib15:00
n4nd0flxb: hmm I don't know then :S15:01
flxbn4nd0: this isn't a big problem. i'll just use modshogun15:01
n4nd0flxb: I guess that there must be an option to install under a prefix where you do have permissions15:01
-!- blackburn_ [50ea622b@gateway/web/freenode/ip.] has joined #shogun15:04
-!- blackburn_ [50ea622b@gateway/web/freenode/ip.] has quit [Quit: Page closed]15:37
-!- gsomix [~gsomix@] has joined #shogun15:38
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun15:41
-!- harshit_ [~harshit@] has quit [Read error: Connection reset by peer]15:57
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds]15:59
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun15:59
-!- gsomix [~gsomix@] has quit [Quit: ????? ? ?? ??? (xchat 2.4.5 ??? ??????)]16:02
-!- gsomix [~gsomix@] has joined #shogun16:02
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds]16:07
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 245 seconds]16:15
-!- harshit_ [~harshit@] has joined #shogun16:32
-!- gsomix [~gsomix@] has quit [Ping timeout: 252 seconds]16:55
-!- gsomix [~gsomix@] has joined #shogun17:04
Marty28void prepare_POIM2 ( float64_t *  distrib,17:10
Marty28int32_t  num_sym,17:10
Marty28int32_t  num_feat17:10
Marty28Here I need a float64_t *17:10
Marty28numpy.ones((4,4)) gives me a numpy.ndarray17:12
Marty28Can I convert the matrix somehow to fit into the float64_t*?17:12
Marty28I try using the script poim.py from http://people.tuebingen.mpg.de/vipin/www.fml.tuebingen.mpg.de/raetsch//projects/easysvm/17:13
Marty28I get the ERROR: "TypeError: in method 'WeightedDegreePositionStringKernel_prepare_POIM2', argument 2 of type 'float64_t *'"17:13
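A likely cause (an assumption based on how SWIG/numpy typemaps usually behave, not verified against this exact Shogun version): the wrapper for a float64_t* argument only accepts a numpy array whose dtype is float64, so an array of any other dtype fails with precisely this TypeError:

```python
import numpy

num_sym, num_feat = 4, 4

# numpy.ones defaults to float64, but being explicit is safer when the
# array was produced elsewhere (e.g. from integer or float32 data):
distrib = numpy.ones((num_sym, num_feat), dtype=numpy.float64)

# an array with a different dtype is the kind of input that typically
# triggers "argument 2 of type 'float64_t *'"
bad = distrib.astype(numpy.float32)
```

If the array comes out of another routine, numpy.ascontiguousarray(arr, dtype=numpy.float64) is the usual conversion to try before passing it in.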
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has quit [Read error: Connection reset by peer]17:19
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has joined #shogun17:30
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun17:33
-!- romovpa [bc2c2ad0@gateway/web/freenode/ip.] has joined #shogun17:54
-!- romovpa_ [bc2c2ad0@gateway/web/freenode/ip.] has joined #shogun17:59
-!- romovpa [bc2c2ad0@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]18:00
gsomixsonney2k, hey18:02
-!- Marty28 [~Marty@] has quit [Quit: ChatZilla [Firefox 11.0/20120310010446]]18:02
-!- romovpa_ [bc2c2ad0@gateway/web/freenode/ip.] has quit [Client Quit]18:03
-!- harshit_ [~harshit@] has quit [Ping timeout: 260 seconds]18:05
-!- Marty28 [~Marty@] has joined #shogun18:35
-!- romi_ [~mizobe@] has quit [Ping timeout: 246 seconds]18:37
-!- romi_ [~mizobe@] has joined #shogun18:38
-!- gsomix [~gsomix@] has quit [Ping timeout: 260 seconds]18:42
-!- gsomix [~gsomix@] has joined #shogun18:43
-!- romi_ [~mizobe@] has quit [Ping timeout: 246 seconds]18:47
-!- nickon [d5774105@gateway/web/freenode/ip.] has joined #shogun19:09
nickongood evening, everyone (for me at least)19:10
-!- romi_ [~mizobe@] has joined #shogun19:12
-!- harshit_ [~harshit@] has joined #shogun19:13
n4nd0nickon: hey!19:19
nickonhey n4nd0 :)19:21
nickonhow are you ?19:21
nickonare you a developer at the shogun organisation ?19:21
n4nd0nickon: I have done some contributions to the project19:21
nickonwhat did you work on then ? :)19:22
n4nd0nickon: and I am fine thanks, what about you?19:22
nickonI'm fine too, thanx :)19:22
n4nd0nickon: small things here and there :)19:23
n4nd0nickon: https://github.com/shogun-toolbox/shogun/commit/afb8b5eb7058640949cc7774d09b8756a66a5ada you can check them here if interested19:23
nickonjust nice to get an idea on who is who and who is working on what :)19:23
n4nd0yeah sure19:24
n4nd0now I am working on a dimensionality reduction algo19:24
nickonwhat kind of method are you implementing ?19:25
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has joined #shogun19:25
nickonmy thesis is somehow related to dimensionality reduction (tensor compression, so not exactly in the context of machine learning ;)19:26
n4nd0Stochastic Proximity Embedding19:26
n4nd0aham, are you working on it now?19:27
nickonwell, not right now ;)19:27
nickonbut yes, during this academic year I've been busy on that19:27
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has quit [Client Quit]19:30
nickonAt the moment I'm actually interested in what shogun offers for GSoC (as I'm still a student :))19:30
nickonthe C5.0 integration project seems to get my attention, can't really explain why19:30
-!- harshit_ [~harshit@] has quit [Ping timeout: 260 seconds]19:31
n4nd0ok, nice19:34
nickonWhere are you from actually, if I may ask?19:34
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has joined #shogun19:35
-!- harshit_ [~harshit@] has joined #shogun19:37
gsomixnickon, hi. if you are interested to know, I'm working on python3 interface and covertree integration now.19:37
-!- harshit_ [~harshit@] has quit [Client Quit]19:37
n4nd0I come from Spain, what about you?19:37
-!- harshit__ [~harshit@] has joined #shogun19:38
-!- harshit__ [~harshit@] has quit [Client Quit]19:38
-!- harshit_ [~harshit@] has joined #shogun19:38
harshit_hey n4nd0, how's it going!19:42
nickonI'm from Belgium :)19:45
nickonsorry I gotta go eat!19:45
nickonsee ya19:45
n4nd0harshit_: hi! good, and you? :D19:46
harshit_n4nd0:good :)19:48
harshit_hey do you know what SG_ADD() does exactly ?19:50
n4nd0harshit_: let me check19:52
n4nd0mmm not exactly19:52
n4nd0I remember Sören told me that m_parameters->add is used for things like serialization19:53
n4nd0but Heiko said that SG_ADD must be done in order to be able to use model selection19:53
n4nd0so I guess it is a kind of method to save some info about the class members19:54
n4nd0but actually no idea what it exactly does19:54
-!- romovpa [b064f6fe@gateway/web/freenode/ip.] has joined #shogun19:59
harshit_got it ..20:03
harshit_thanks :)20:04
romovpaHi all. I've found that saving the model isn't implemented for LinearMachine and many other machines. I would like to implement the missing save/load methods. Is it a good idea?20:15
blackburnromovpa: do you mean serialization?20:18
blackburnromovpa: btw do you know vojtech franc?20:18
blackburnharshit_: SG_ADD is m_parameters->add20:18
romovpayes, I mean20:18
blackburnn4nd0: did you solve your issue?20:19
blackburnromovpa: no, that's not needed - it works already20:19
romovpablackburn: (about Vojtech Franc) no, I have written to him, but haven't received any answer20:20
blackburnaha I see20:20
romovpablackburn: hmm... why I couldn't save my liblinear model through cmdline interface?20:21
blackburnoh you use this stuff..20:21
romovpablackburn: =)20:21
blackburnwhy cmdline?20:21
blackburnanyway let me check how it works20:22
n4nd0blackburn: hey! you talking about spe or o-vs-o?20:23
blackburnn4nd0: both probably20:23
n4nd0blackburn: ah, for o-vs-o I am waiting for Heiko's answer, spe is still on progress :)20:23
blackburnok I see20:24
blackburnfeel free to ask if you need help20:24
n4nd0thank you :)20:24
n4nd0blackburn: a theoretical thing, embedding == DR method?20:27
blackburnn4nd0: embedding is a process of R^n -> R^t so yes20:28
n4nd0blackburn: aham, I understand that they are synonyms then20:28
Marty28python3? nice20:31
blackburnromovpa: ok that makes sense probably but my advice is to use modular interface20:32
romovpablackburn: Could you explain me, what is the right way of saving trained model? (Using libshogun and c++) Are CMachine.save()/load() methods deprecated?20:32
blackburnuse save_serializable20:33
blackburnromovpa: actually I do not know whether it should work right in static right now20:34
blackburnbut anyway save_serializable does the job20:34
romovpablackburn: I'm trying to use many interfaces and libshogun directly, thats ok =)20:35
blackburnromovpa: main issue of static is we have to update it *manually*20:36
romovpaok, thanks20:36
blackburneach time we add new feature we actually should update it20:36
blackburnbut actually we don't20:36
blackburnso it becomes outdated day-by-day hah20:36
romovpablackburn: documentation seems to have a similar problem20:45
blackburnoh yes we hear this thing everyday20:46
harshit_blackburn: I have few questions regarding the liblinear/ocas and lbp features project .. can you help me out ?20:57
-!- romovpa [b064f6fe@gateway/web/freenode/ip.] has quit [Ping timeout: 245 seconds]21:00
blackburnharshit_: yes21:06
blackburnone warning - I believe it is not really relevant anymore21:06
blackburnliblinear and ocas are up-to-date21:06
-!- harshit_ [~harshit@] has quit [Ping timeout: 272 seconds]21:08
-!- nickon [d5774105@gateway/web/freenode/ip.] has quit [Quit: Page closed]21:18
-!- harshit_ [~harshit@] has joined #shogun21:18
harshit_blackburn: only thing i noticed that is not in shogun is the multi class svm21:19
harshit_of liblinear21:19
blackburnyes it is in21:19
harshit_you mean multiclass svm of liblinear, is now in shogun .?21:21
blackburnyes, MulticlassLiblinear21:21
harshit_but in liblinear.cpp, under the MCSVM_CS case nothing is there except SG_NOTIMPLEMENTED21:23
blackburnyou probably have outdated shogun21:24
harshit_actually i was checking out in documentation21:24
blackburnsource on site is not up-to-date for sure21:25
harshit_ok, so in the project I was considering for gsoc, actually nothing is left except lbp features21:26
blackburnyeah probably :)21:26
blackburnI believe it is not a problem21:27
blackburnthere is a week still and nobody wants you to start with project before proposals deadline21:27
blackburne.g. n4nd0 contributes with a variety of patches and it is actually ok21:28
harshit_ok , so could you tell me about decision trees21:31
harshit_I mean I remember you saying that21:31
harshit_its quite hard to implement21:31
harshit_why so ?21:31
blackburnno, not really21:31
blackburnI think summer would be enough to adapt it for shogun21:31
harshit_so C5.0 + lbp features will make a nice project !21:33
blackburnmakes sense21:33
harshit_and regarding lbp features : i think the implementation of openCV is quite nice21:33
blackburnthat needs some discussion21:33
harshit_but I wonder why the liblinear/ocas project is in the ideas list when it is already up-to-date21:34
blackburnactually I believe opencv features would be better21:34
blackburnyes probably this one should be removed21:35
harshit_I think liblinear is going to release their regression feature this summer, so that might be why it is still there.21:36
harshit_any thoughts on that ?21:37
blackburnI have no idea actually21:37
harshit_Actually i am not able to chat with sonney2k, Time lag :(21:39
harshit_so I am having problem in refining my project21:39
blackburnif you are sure liblinear will release you may add this to project21:40
-!- romi_ [~mizobe@] has quit [Ping timeout: 246 seconds]21:40
harshit_I think Soeren is involved with liblinear too, so he will probably have a better idea about the release!21:42
blackburnno actually he is not involved with liblinear21:50
blackburnso what are ideas you want to apply for?21:50
n4nd0blackburn: so one thing about SPE and using KNN with cover tree21:52
blackburnaha shoot21:52
-!- romi_ [~mizobe@] has joined #shogun21:52
n4nd0blackburn: so I see that I have to use the same that is in LLE, as you told me21:53
blackburnyou don't have to but it has best performance21:53
n4nd0blackburn: but to copy the code there directly is not a very nice solution21:53
n4nd0blackburn: or is it ok like that?21:53
blackburnhmm you may inherit from isomap21:54
blackburnbut not really better21:54
n4nd0not to have the same code in several places ...21:54
harshit_blackburn: Actually saw his name in liblinear paper,21:54
blackburnharshit_: are you sure?21:54
blackburnn4nd0: ok let me describe my plan on libedrt21:54
n4nd0blackburn: ok21:54
blackburnactually I plan to extract things into half-standalone libedrt21:54
blackburnso it is ok, I will just reuse one function in solid library21:55
blackburnit would take a while to adapt your code21:55
harshit_yeah , check this out :http://www.csie.ntu.edu.tw/~cjlin/papers/liblinear.pdf21:55
n4nd0blackburn: ok, I will just comment that it is taken from there21:55
n4nd0blackburn: c'mon, I will try to do it good so it is not that much to adapt :P21:55
harshit_and saw his name in libocas too :)21:56
blackburnharshit_: he was action editor there21:56
blackburnno involvement with project itself21:56
blackburnhe is an author of libocas21:56
blackburnas well as v. franc21:56
harshit_oh gr821:57
blackburnn4nd0: would require some adaptations anyway21:57
blackburnjust copy the code21:57
harshit_blackburn: I think C5.0 and lbp features will make a good project for me in gsoc, but the problem is that I don't have much knowledge about C5.022:02
blackburnn4nd0: that code is duplicated in isomap already22:03
harshit_except some knowledge about decision trees and pruning in general22:03
n4nd0blackburn: ok22:03
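The routine being discussed (shared by LLE, Isomap, and soon SPE) computes the k nearest neighbors of every point. A naive reference version in plain Python (not the cover-tree implementation, which produces the same neighbor sets faster) might look like:

```python
def k_nearest_neighbors(points, k):
    """Brute-force k-NN: for each point, indices of its k closest others.

    Cover trees give the same answer with far better scaling than this
    O(n^2) scan; this is only the naive reference computation.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    neighbors = []
    for i, p in enumerate(points):
        order = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: dist2(p, points[j]))
        neighbors.append(order[:k])
    return neighbors

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (1.1, 0.1)]
nn = k_nearest_neighbors(pts, k=2)
# point 0's two nearest neighbours are points 1 and 3
```

Extracting exactly this kind of shared routine into one place is the point of the libedrt plan mentioned above.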
n4nd0harshit_: in my opinion, you shouldn't probably be scared about that :) I'm sure it's sth you are able to pick up and implement22:04
harshit_thanks for boosting my confidence n4nd0, I think finally i'll go with that project only but just need some research before that22:06
blackburnn4nd0: actually I would suggest you to work on libedrt directly if it was already in shogun22:08
blackburnbut Soeren didn't like it and it is not ready so it is in separate branch yet22:09
n4nd0blackburn: I saw your commit proof of concept, but you didn't like it that much at the end, did you?22:10
n4nd0oh yeah, I remember whe you guys talked about that22:11
blackburnn4nd0: yes but I still think I should get this done22:11
harshit_hey, is there any reference on how cross validation is implemented in shogun?22:13
blackburnI am afraid Heiko is the only reference for that :D22:14
harshit_okay, btw whats the nickname of heiko here ?22:15
blackburn'heiko' as his name usually22:16
n4nd0karlnapf too?22:16
blackburnah yes22:17
blackburnI probably got it wrong22:17
harshit_haven't seen him often!22:17
blackburnkarlnapf is more usual22:17
harshit_blackburn: there is a project named "Integrate SGD-QN" in the 2011 ideas that actually seems to be easy. Should I work on it for now?22:19
harshit_as in, before gsoc22:20
-!- blackburn1 [~qdrgsm@] has joined #shogun22:22
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 246 seconds]22:23
-!- blackburn1 is now known as blackburn22:24
harshit_It is in shogun :(22:25
blackburnwhat is in shogun?22:26
harshit_i was just going through 2011 ideas list, to get something cool to work on now22:28
blackburnah yes22:28
-!- vikram360 [~vikram360@] has quit [Ping timeout: 276 seconds]22:32
-!- vikram360 [~vikram360@] has joined #shogun22:33
n4nd0blackburn: this macro that is used around in DR methods22:51
n4nd0I guess there must be an ifdef around there to activate/deactivate it, right?22:51
blackburnn4nd0: no, why?22:51
n4nd0blackburn: oh, that's just what I expected22:52
n4nd0blackburn: so it is just like SG_PRINT, it will always appear on screen?22:52
blackburnit prints nothing if the loglevel is above MSG_DEBUG22:52
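The behaviour described (debug output compiled in unconditionally but filtered by the runtime log level) can be illustrated with Python's logging module; this is only an analogy, not shogun's SG_DEBUG implementation.

```python
import io
import logging

# A DEBUG message is emitted only when the logger's level is DEBUG or
# lower; at INFO and above it is silently dropped, with no ifdef needed.
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))

logger.debug("hidden from the user")  # suppressed: level is INFO
logger.info("shown to the user")      # emitted

print(stream.getvalue())  # only the INFO line made it through
```

SG_PRINT, by contrast, corresponds to output that bypasses the level check entirely.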
-!- harshit_ [~harshit@] has quit [Remote host closed the connection]23:10
-!- romovpa_ [b064f6fe@gateway/web/freenode/ip.] has joined #shogun23:44
-!- gsomix [~gsomix@] has quit [Quit: ????? ? ?? ??? (xchat 2.4.5 ??? ??????)]23:49
-!- gsomix [~gsomix@] has joined #shogun23:49
romovpa_could somebody explain to me what the abbreviation PLIF stands for?23:51
blackburnromovpa_: piecewise linear function23:51
-!- Vuvu [~Vivan_Ric@] has joined #shogun23:54
-!- blackburn [~qdrgsm@] has quit [Ping timeout: 246 seconds]23:59
--- Log closed Sun Apr 01 00:00:19 2012